Feb 23 17:30:42 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 17:30:42 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 17:30:42 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 
17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 17:30:43 crc 
restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 
17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 17:30:43 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
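
The long run of restorecon "not reset as customized by admin" entries above is the message setfiles/restorecon emits for files whose SELinux type is a customizable type (container_file_t, seen throughout this log, is conventionally one): such labels are treated as deliberate admin customizations and preserved rather than reset. A minimal shell sketch for inspecting this behavior, not taken from this node; the policy-store path assumes the default targeted policy:

    # List the types restorecon treats as admin customizations
    # (assumes the default "targeted" policy store).
    cat /etc/selinux/targeted/contexts/customizable_types

    # Show the current label on a file named in the log, then ask
    # what the loaded policy says that path should carry.
    ls -Z /var/lib/kubelet/config.json
    matchpathcon /var/lib/kubelet/config.json

    # Recursive relabel: files with customizable types are skipped
    # ("not reset"), matching the entries above...
    restorecon -Rv /var/lib/kubelet
    # ...and are only reset when a forced relabel is requested.
    restorecon -RFv /var/lib/kubelet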
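The kubelet deprecation warnings above and continuing below all point the same way: these command-line flags should move into the KubeletConfiguration file passed via --config. A hand-written sketch of that migration; every value is illustrative rather than read from this node, and the target path is an assumption:

    # Write an example KubeletConfiguration covering the deprecated
    # flags reported in this log (placeholder values, not this node's).
    cat <<'EOF' > /tmp/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved
    systemReserved:
      cpu: 500m
      memory: 1Gi
    # replaces --minimum-container-ttl-duration, which the warning
    # above says to express as eviction settings instead
    evictionHard:
      memory.available: 100Mi
    # feature gates (e.g. the CloudDualStackNodeIPs=true logged below)
    # are also set here rather than on the command line
    featureGates:
      CloudDualStackNodeIPs: true
    EOF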
Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 17:30:44 crc kubenswrapper[4724]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.642039 4724 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644924 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644946 4724 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644951 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644956 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644962 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644967 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644972 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644977 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644991 4724 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.644996 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645001 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645005 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645009 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645013 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645017 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645022 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645026 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645030 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645033 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645038 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645042 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645047 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645052 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645057 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645062 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645067 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645072 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645077 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645083 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645091 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645097 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645102 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645108 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645114 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645118 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645124 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645128 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645132 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645136 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645143 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645148 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645153 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645157 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645162 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645166 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645170 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645175 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645180 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645185 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645189 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645195 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645201 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645208 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645215 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645221 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645226 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645231 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645235 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645239 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645244 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645248 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645252 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645255 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645259 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645262 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645265 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645269 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645272 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645276 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645279 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.645282 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23
17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646260 4724 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646276 4724 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646284 4724 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646291 4724 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646296 4724 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646301 4724 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646307 4724 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646312 4724 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646316 4724 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646320 4724 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646336 4724 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646340 4724 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646344 4724 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646349 4724 flags.go:64] FLAG: --cgroup-root="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646353 4724 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646357 4724 flags.go:64] FLAG: --client-ca-file="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646361 4724 flags.go:64] FLAG: --cloud-config="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646364 4724 flags.go:64] FLAG: --cloud-provider="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646368 4724 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646373 4724 flags.go:64] FLAG: --cluster-domain="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646377 4724 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646382 4724 flags.go:64] FLAG: --config-dir="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646401 4724 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646406 4724 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646412 4724 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646417 4724 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646421 4724 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646426 4724 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646431 4724 flags.go:64] FLAG: --contention-profiling="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 
17:30:44.646437 4724 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646441 4724 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646445 4724 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646450 4724 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646455 4724 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646459 4724 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646464 4724 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646468 4724 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646472 4724 flags.go:64] FLAG: --enable-server="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646476 4724 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646485 4724 flags.go:64] FLAG: --event-burst="100" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646490 4724 flags.go:64] FLAG: --event-qps="50" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646494 4724 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646499 4724 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646503 4724 flags.go:64] FLAG: --eviction-hard="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646508 4724 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646512 4724 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646516 4724 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646521 4724 flags.go:64] FLAG: --eviction-soft="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646525 4724 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646529 4724 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646533 4724 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646537 4724 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646541 4724 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646545 4724 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646549 4724 flags.go:64] FLAG: --feature-gates="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646554 4724 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646560 4724 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646564 4724 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646569 4724 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 
17:30:44.646573 4724 flags.go:64] FLAG: --healthz-port="10248" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646578 4724 flags.go:64] FLAG: --help="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646582 4724 flags.go:64] FLAG: --hostname-override="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646586 4724 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646591 4724 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646595 4724 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646600 4724 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646603 4724 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646607 4724 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646612 4724 flags.go:64] FLAG: --image-service-endpoint="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646616 4724 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646620 4724 flags.go:64] FLAG: --kube-api-burst="100" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646625 4724 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646630 4724 flags.go:64] FLAG: --kube-api-qps="50" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646635 4724 flags.go:64] FLAG: --kube-reserved="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646640 4724 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646644 4724 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646649 4724 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646654 4724 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646659 4724 flags.go:64] FLAG: --lock-file="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646663 4724 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646668 4724 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646672 4724 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646678 4724 flags.go:64] FLAG: --log-json-split-stream="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646682 4724 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646686 4724 flags.go:64] FLAG: --log-text-split-stream="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646690 4724 flags.go:64] FLAG: --logging-format="text" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646694 4724 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646698 4724 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646702 4724 flags.go:64] FLAG: --manifest-url="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646706 4724 
flags.go:64] FLAG: --manifest-url-header="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646712 4724 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646716 4724 flags.go:64] FLAG: --max-open-files="1000000" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646721 4724 flags.go:64] FLAG: --max-pods="110" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646725 4724 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646730 4724 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646734 4724 flags.go:64] FLAG: --memory-manager-policy="None" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646740 4724 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646744 4724 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646748 4724 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646753 4724 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646763 4724 flags.go:64] FLAG: --node-status-max-images="50" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646767 4724 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646772 4724 flags.go:64] FLAG: --oom-score-adj="-999" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646776 4724 flags.go:64] FLAG: --pod-cidr="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646779 4724 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646787 4724 flags.go:64] FLAG: --pod-manifest-path="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646791 4724 flags.go:64] FLAG: --pod-max-pids="-1" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646795 4724 flags.go:64] FLAG: --pods-per-core="0" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646799 4724 flags.go:64] FLAG: --port="10250" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646803 4724 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646807 4724 flags.go:64] FLAG: --provider-id="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646811 4724 flags.go:64] FLAG: --qos-reserved="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646815 4724 flags.go:64] FLAG: --read-only-port="10255" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646819 4724 flags.go:64] FLAG: --register-node="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646823 4724 flags.go:64] FLAG: --register-schedulable="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646827 4724 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646834 4724 flags.go:64] FLAG: --registry-burst="10" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646838 4724 flags.go:64] FLAG: --registry-qps="5" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646842 4724 flags.go:64] 
FLAG: --reserved-cpus="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646846 4724 flags.go:64] FLAG: --reserved-memory="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646851 4724 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646855 4724 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646859 4724 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646863 4724 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646867 4724 flags.go:64] FLAG: --runonce="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646871 4724 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646875 4724 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646880 4724 flags.go:64] FLAG: --seccomp-default="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646884 4724 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646888 4724 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646893 4724 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646897 4724 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646901 4724 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646905 4724 flags.go:64] FLAG: --storage-driver-secure="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646909 4724 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646913 4724 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646918 4724 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646922 4724 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646926 4724 flags.go:64] FLAG: --system-cgroups="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646930 4724 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646937 4724 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646941 4724 flags.go:64] FLAG: --tls-cert-file="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646945 4724 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646949 4724 flags.go:64] FLAG: --tls-min-version="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646953 4724 flags.go:64] FLAG: --tls-private-key-file="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646957 4724 flags.go:64] FLAG: --topology-manager-policy="none" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646961 4724 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646965 4724 flags.go:64] FLAG: --topology-manager-scope="container" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646969 4724 flags.go:64] 
FLAG: --v="2" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646975 4724 flags.go:64] FLAG: --version="false" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646981 4724 flags.go:64] FLAG: --vmodule="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646986 4724 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.646990 4724 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647095 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647099 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647103 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647108 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647113 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647119 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647125 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647131 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647136 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647141 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647145 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647150 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647153 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647158 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647162 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647167 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647171 4724 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647175 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647179 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647183 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647186 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647190 4724 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647193 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647197 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647201 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647204 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647208 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647211 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647214 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647218 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647223 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647227 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647231 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647235 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647238 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647247 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647250 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647254 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647258 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647261 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647265 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647269 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647272 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647275 4724 feature_gate.go:330] unrecognized feature gate: Example Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647279 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647282 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647286 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 17:30:44 crc 
kubenswrapper[4724]: W0223 17:30:44.647289 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647293 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647296 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647299 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647303 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647307 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647310 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647314 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647317 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647321 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647324 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647327 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647331 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647334 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647337 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647341 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647344 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647347 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647351 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647354 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647359 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647364 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647370 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.647374 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.648077 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.659937 4724 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.659987 4724 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660059 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660067 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660072 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660076 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660081 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660085 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660090 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660094 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660099 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660105 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660110 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660117 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660123 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660128 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660132 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660136 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660141 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660145 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660149 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660153 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660157 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660161 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660165 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660168 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660172 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660176 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660180 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660184 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660188 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660192 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660196 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660199 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660203 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660207 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660210 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660214 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660217 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660221 4724 feature_gate.go:330] unrecognized feature 
gate: NetworkLiveMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660224 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660228 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660232 4724 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660235 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660239 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660244 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660248 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660251 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660256 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660262 4724 feature_gate.go:330] unrecognized feature gate: Example Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660266 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660270 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660275 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660279 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660284 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660288 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660292 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660297 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660300 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660304 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660308 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660311 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660315 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660319 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660323 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660328 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660333 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660336 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660340 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660343 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660347 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660351 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660354 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.660362 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660498 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660506 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660511 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 
17:30:44.660517 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660522 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660526 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660530 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660534 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660537 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660541 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660545 4724 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660549 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660553 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660556 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660560 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660563 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660568 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660571 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660576 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660580 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660583 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660587 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660591 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660596 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660600 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660604 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660607 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660612 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660616 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 17:30:44 crc kubenswrapper[4724]: 
W0223 17:30:44.660619 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660623 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660627 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660631 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660636 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660641 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660647 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660652 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660656 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660661 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660664 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660668 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660672 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660676 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660681 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660685 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660690 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660693 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660697 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660701 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660704 4724 feature_gate.go:330] unrecognized feature gate: Example Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660708 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660712 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660715 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660719 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660722 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660726 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660730 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660734 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660738 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660742 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660746 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660751 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660755 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660759 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660763 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660768 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660772 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660776 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660782 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660786 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.660790 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.660797 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.661771 4724 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.666988 4724 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.667078 4724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.668285 4724 server.go:997] "Starting client certificate rotation"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.668320 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.669236 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-19 20:22:59.753812253 +0000 UTC
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.669297 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.689962 4724 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.693631 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.693774 4724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.706640 4724 log.go:25] "Validated CRI v1 runtime API"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.737487 4724 log.go:25] "Validated CRI v1 image API"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.740183 4724 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.744598 4724 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-17-25-27-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.744656 4724 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.774032 4724 manager.go:217] Machine: {Timestamp:2026-02-23 17:30:44.771531989 +0000 UTC m=+0.587731679 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:883aa43b-ee67-45aa-9f6b-7760dc931d5e BootID:aaac6a71-65af-4ded-9945-71c01ce15653 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d3:42:3c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d3:42:3c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:4a:8f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ae:a3:5c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a4:94:f7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d7:c3:18 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:70:6d:6e:ee:56 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:86:e4:8e:82:eb:31 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.774452 4724 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.774656 4724 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.777258 4724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.778367 4724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.778452 4724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.778784 4724 topology_manager.go:138] "Creating topology manager with none policy"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.778804 4724 container_manager_linux.go:303] "Creating device plugin manager"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.779448 4724 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.779501 4724 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.779831 4724 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.779964 4724 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.784313 4724 kubelet.go:418] "Attempting to sync node with API server"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.784352 4724 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.784423 4724 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.784449 4724 kubelet.go:324] "Adding apiserver pod source"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.784470 4724 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.789651 4724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.789668 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.789670 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.789795 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.789832 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.790801 4724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.792741 4724 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794487 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794516 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794525 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794535 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794548 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794559 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794567 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794580 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794592 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794605 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794617 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.794625 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.795442 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.796022 4724 server.go:1280] "Started kubelet"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.796210 4724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.796320 4724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.797088 4724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.797359 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Feb 23 17:30:44 crc systemd[1]: Started Kubernetes Kubelet.
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799459 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799507 4724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799536 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:49:29.140035841 +0000 UTC
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799799 4724 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799856 4724 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.799889 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.799956 4724 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.800233 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.802433 4724 factory.go:55] Registering systemd factory
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.802466 4724 factory.go:221] Registration of the systemd container factory successfully
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.811563 4724 factory.go:153] Registering CRI-O factory
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.811606 4724 factory.go:221] Registration of the crio container factory successfully
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.811792 4724 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.811830 4724 factory.go:103] Registering Raw factory
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.811859 4724 manager.go:1196] Started watching for new ooms in manager
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.815157 4724 manager.go:319] Starting recovery of all containers
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.817549 4724 server.go:460] "Adding debug handlers to kubelet server"
Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.817720 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.817830 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.817669 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896f06ac60078d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,LastTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.900305 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912170 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912289 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912307 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912319 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912408 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912424 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912438 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912505 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912524 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912579 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912593 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912604 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912614 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912630 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912642 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912665 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912680 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912692 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912705 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912722 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912733 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912743 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912754 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912769 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912780 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912802 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912917 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912938 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912949 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912966 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912982 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.912996 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913011 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913021 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913037 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913073 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913113 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913129 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913139 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913152 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.913164 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915242 4724 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915270 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915284 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915294 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915305 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915314 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915328 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915340 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915351 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915364 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915376 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915402 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915424 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915439 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915455 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915467 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915481 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915492 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915504 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915515 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915526 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915543 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915562 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915574 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915584 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915594 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915609 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915626 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915636 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915647 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915658 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915669 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915681 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915692 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915702 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915713 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915731 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915743 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915773 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915787 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915799 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915811 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915824 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915837 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915862 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915873 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915891 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915905 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915917 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915928 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915940 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915952 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915963 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915976 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.915988 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916001 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916021 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916035 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916071 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916084 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916097 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916111 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916121 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916134 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916152 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916167 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916182 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916198 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916213 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916227 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916242 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916256 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916269 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916284 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916307 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916320 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916331 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916343 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916356 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916368 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916379 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916411 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916427 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916440 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916453 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916464 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916477 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916488 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916501 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916512 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916523 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916536 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916547 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916558 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916568 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916580 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916591 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916602 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916612 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916623 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916632 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916643 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916655 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916665 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916675 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916686 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916699 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916710 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916721 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.916730 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918521 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918576 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918620 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918638 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918655 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918686 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918704 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918731 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918746 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918761 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918784 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918801 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918830 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918848 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918864 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918888 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918906 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918929 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918948 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918949 4724 manager.go:324] Recovery completed Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.918962 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920490 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920522 4724 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920536 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920557 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920571 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920617 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920662 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920677 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920724 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920765 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920783 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920798 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920813 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920857 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920873 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920889 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920946 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920961 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920976 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.920987 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921002 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921014 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921025 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921043 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921082 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921098 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921137 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921150 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921204 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921217 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921230 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921247 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921295 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921316 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921327 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921339 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921353 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921364 4724 reconstruct.go:97] "Volume reconstruction finished" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.921373 4724 reconciler.go:26] "Reconciler: start to sync state" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.938808 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.940982 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.941027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.941039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.942016 4724 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.942049 4724 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.942082 4724 state_mem.go:36] "Initialized new in-memory state store" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.946526 4724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.949626 4724 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.949681 4724 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.949724 4724 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.949914 4724 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 17:30:44 crc kubenswrapper[4724]: W0223 17:30:44.950661 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:44 crc kubenswrapper[4724]: E0223 17:30:44.950757 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.963982 4724 policy_none.go:49] "None policy: Start" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.965018 4724 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 17:30:44 crc kubenswrapper[4724]: I0223 17:30:44.965056 4724 state_mem.go:35] "Initializing new in-memory state store" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.000540 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.000991 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.027335 4724 manager.go:334] "Starting Device Plugin manager" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.027438 4724 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.027464 4724 server.go:79] "Starting device plugin registration server" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.028018 4724 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.028042 4724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.028460 4724 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.028545 4724 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.028556 4724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.036224 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.050473 4724 kubelet.go:2421] 
"SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.050595 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.052695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.052768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.052782 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.053091 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.053229 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.053281 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.054652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.054715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.054727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.054979 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.055218 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.055295 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.055383 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.055452 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.055468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056256 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056292 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056307 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056363 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056405 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056766 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.056966 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057034 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057824 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057874 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.057965 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.058010 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.058269 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.058359 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.058428 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059674 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059717 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059758 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059781 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.059792 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.060114 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.060158 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.060958 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.060986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.060996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125058 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125178 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125249 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125295 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125342 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125387 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125477 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125521 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125564 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.125988 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.126057 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.126106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.126212 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.126255 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.128960 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.130294 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.130338 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc 
kubenswrapper[4724]: I0223 17:30:45.130354 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.130414 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.130873 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc"
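The burst of reconciler_common.go and operation_generator.go entries that follows is the volume reconciler at work: it compares the desired state of the world (the host-path volumes the five static pod manifests declare) against the actual state and issues a MountVolume operation for each missing pair. For a host-path volume, SetUp amounts to little more than verifying the path, which is why every mount below succeeds within about a millisecond. A toy version of that loop; the volume names come from the log, but the paths and the map-based bookkeeping are placeholders, not kubelet's operationexecutor:

    // reconcile.go: desired-vs-actual volume reconciliation, host-path only.
    package main

    import (
        "fmt"
        "os"
    )

    type volume struct{ name, hostPath, pod string }

    // setUpHostPath stands in for MountVolume.SetUp: for host-path volumes
    // there is no filesystem mount to perform, only a path to validate.
    func setUpHostPath(v volume) error {
        _, err := os.Stat(v.hostPath)
        return err
    }

    func reconcile(desired []volume, actual map[string]bool) {
        for _, v := range desired {
            key := v.pod + "/" + v.name
            if actual[key] {
                continue // already in the actual state; nothing to do
            }
            fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
            if err := setUpHostPath(v); err != nil {
                fmt.Printf("MountVolume failed: %v\n", err)
                continue
            }
            actual[key] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
        }
    }

    func main() {
        // Volume names as in the log; paths are illustrative placeholders.
        desired := []volume{
            {"etc-kube", "/etc/kubernetes", "kube-rbac-proxy-crio-crc"},
            {"data-dir", "/var/lib/etcd", "etcd-crc"},
        }
        reconcile(desired, map[string]bool{})
    }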
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.227959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228007 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228030 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228060 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228155 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228236 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228278 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228241 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228332 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228303 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228470 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228447 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228448 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228604 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228536 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228566 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228747 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228739 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228783 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228752 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228787 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.228715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.229178 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.229186 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.331008 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.332358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.332411 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.332425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.332454 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.332802 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.387110 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.392756 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.402798 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.407173 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.426301 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.436483 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:45 crc kubenswrapper[4724]: W0223 17:30:45.441693 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-12a5181a4a9ae74f6c820830db8cce0ffd8d4c245ee16ee52a9c72764fdb3fb9 WatchSource:0}: Error finding container 12a5181a4a9ae74f6c820830db8cce0ffd8d4c245ee16ee52a9c72764fdb3fb9: Status 404 returned error can't find the container with id 12a5181a4a9ae74f6c820830db8cce0ffd8d4c245ee16ee52a9c72764fdb3fb9 Feb 23 17:30:45 crc kubenswrapper[4724]: W0223 17:30:45.443520 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dda77573554c8544eb004af4710aa98c6be60bae77ff9c358ce1fbf5417d83da WatchSource:0}: Error finding container dda77573554c8544eb004af4710aa98c6be60bae77ff9c358ce1fbf5417d83da: Status 404 returned error can't find the container with id dda77573554c8544eb004af4710aa98c6be60bae77ff9c358ce1fbf5417d83da Feb 23 17:30:45 crc kubenswrapper[4724]: W0223 17:30:45.451380 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1d54e18da56b762d051073bc9da7c74cab963a4d661f0c2bca0f6290f52bb82a WatchSource:0}: Error finding container 1d54e18da56b762d051073bc9da7c74cab963a4d661f0c2bca0f6290f52bb82a: Status 404 returned error can't find the container with id 1d54e18da56b762d051073bc9da7c74cab963a4d661f0c2bca0f6290f52bb82a Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.733091 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.735177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.735241 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.735258 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.735296 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.735723 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Feb 23 17:30:45 crc kubenswrapper[4724]: W0223 17:30:45.787277 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:45 crc kubenswrapper[4724]: E0223 17:30:45.787412 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.799043 
4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.800051 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:32:17.897194665 +0000 UTC Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.955790 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a939097cf5a3cef2039e4b89baaa33c4b4118467e91ee5eea1e7cf8627a399e9"} Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.957074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e578002e278da5455df2bfeaa69ededbabb7303b82bff14e9ec9c44e2925e55f"} Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.958216 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d54e18da56b762d051073bc9da7c74cab963a4d661f0c2bca0f6290f52bb82a"} Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.959410 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dda77573554c8544eb004af4710aa98c6be60bae77ff9c358ce1fbf5417d83da"} Feb 23 17:30:45 crc kubenswrapper[4724]: I0223 17:30:45.967423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"12a5181a4a9ae74f6c820830db8cce0ffd8d4c245ee16ee52a9c72764fdb3fb9"} Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.203978 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s" Feb 23 17:30:46 crc kubenswrapper[4724]: W0223 17:30:46.263872 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.264016 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:46 crc kubenswrapper[4724]: W0223 17:30:46.310144 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.310834 4724 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:46 crc kubenswrapper[4724]: W0223 17:30:46.384581 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.384726 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.536596 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.572316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.572425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.572453 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.572504 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.573237 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.781995 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 17:30:46 crc kubenswrapper[4724]: E0223 17:30:46.783497 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.798908 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.800161 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:15:12.149539412 +0000 UTC Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.972348 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.972412 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.973941 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74" exitCode=0 Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.973998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.974036 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.975110 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.975152 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.975167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.975911 4724 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3" exitCode=0 Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.976000 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.976038 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.976957 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.976982 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.976992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.977045 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978029 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978050 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978059 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978412 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978476 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.978938 4724 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2" exitCode=0 Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.980176 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.980201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.980211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.984232 4724 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92" exitCode=0 Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.984271 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92"} Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.984373 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.985031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.985074 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:46 crc kubenswrapper[4724]: I0223 17:30:46.985377 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.799307 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.800381 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:31:45.012039434 +0000 UTC Feb 23 17:30:47 crc kubenswrapper[4724]: E0223 17:30:47.804939 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 
17:30:47.992961 4724 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935" exitCode=0 Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.993071 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935"} Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.993093 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.994251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.994293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.994306 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.995609 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b"} Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.995629 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.997101 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.997142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:47 crc kubenswrapper[4724]: I0223 17:30:47.997157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.002044 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782"} Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.002132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87"} Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.007234 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d"} Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.007295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609"} Feb 23 17:30:48 crc kubenswrapper[4724]: 
I0223 17:30:48.007334 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.009002 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.009034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.009047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:48 crc kubenswrapper[4724]: W0223 17:30:48.010542 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:48 crc kubenswrapper[4724]: E0223 17:30:48.010607 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.015324 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de"} Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.015362 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530"} Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.174336 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.175598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.175628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.175638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.175665 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:30:48 crc kubenswrapper[4724]: E0223 17:30:48.176024 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Feb 23 17:30:48 crc kubenswrapper[4724]: W0223 17:30:48.272029 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:48 crc kubenswrapper[4724]: E0223 17:30:48.272134 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.798704 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Feb 23 17:30:48 crc kubenswrapper[4724]: I0223 17:30:48.800813 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:45:50.769874491 +0000 UTC Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.026775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef"} Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.026884 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.028554 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.028610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.028625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.040005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d09bb8047e8dbfa75906b4c61f51eda4b3045409f0aba10a882f2d5700d6acd"} Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.040230 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619"} Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.040453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d"} Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.040130 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.041779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.041831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.041849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.043879 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407" exitCode=0 Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.043988 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.044074 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.044186 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407"} Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.044317 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045168 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045891 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.045951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.046151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.046182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.046200 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.801676 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:24:43.635784971 +0000 UTC Feb 23 17:30:49 crc kubenswrapper[4724]: I0223 17:30:49.970953 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054381 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054455 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30"} Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba"} Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24"} Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054758 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.054428 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.055387 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.055627 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.055738 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.055921 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.055961 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.056322 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.056375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.056421 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.058362 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.058549 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.058581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.507793 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.518698 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.753654 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:50 crc kubenswrapper[4724]: I0223 17:30:50.802875 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:14:21.011411723 +0000 UTC Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 
17:30:51.066465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3"} Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.066556 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20"} Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.066601 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.066647 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.066729 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068555 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068595 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068606 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068851 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.068934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.069012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.069054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.171517 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.377139 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.379528 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.379616 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.379638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.379698 4724 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Feb 23 17:30:51 crc kubenswrapper[4724]: I0223 17:30:51.804083 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:11:52.629132412 +0000 UTC Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.069702 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.069702 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.070572 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071463 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071534 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071537 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071635 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071569 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071743 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071781 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.071803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.226030 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.226379 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.228137 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.228201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.228229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.579297 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.768992 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 
17:30:52 crc kubenswrapper[4724]: I0223 17:30:52.805041 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:20:35.507151135 +0000 UTC Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.071811 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.071864 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.072786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.072823 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.072837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.073423 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.073469 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.073489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.164446 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.164667 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.165979 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.166023 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.166035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:53 crc kubenswrapper[4724]: I0223 17:30:53.805652 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:38:25.9280953 +0000 UTC Feb 23 17:30:55 crc kubenswrapper[4724]: I0223 17:30:55.059335 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:22:05.627663133 +0000 UTC Feb 23 17:30:55 crc kubenswrapper[4724]: I0223 17:30:55.079234 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:55 crc kubenswrapper[4724]: I0223 17:30:55.079512 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:55 crc kubenswrapper[4724]: E0223 17:30:55.079856 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 17:30:55 crc 
kubenswrapper[4724]: I0223 17:30:55.081604 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:55 crc kubenswrapper[4724]: I0223 17:30:55.081666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:55 crc kubenswrapper[4724]: I0223 17:30:55.081692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:56 crc kubenswrapper[4724]: I0223 17:30:56.060381 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:52:56.989815601 +0000 UTC Feb 23 17:30:57 crc kubenswrapper[4724]: I0223 17:30:57.060953 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:25:23.69595813 +0000 UTC Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.061486 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:41:53.805447921 +0000 UTC Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.079941 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.080055 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.584707 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.585022 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.589181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.589259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.589285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:30:58 crc kubenswrapper[4724]: W0223 17:30:58.975341 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.975571 4724 trace.go:236] Trace[1512153699]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 17:30:48.974) (total time: 10001ms): Feb 23 17:30:58 crc kubenswrapper[4724]: Trace[1512153699]: 
---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:30:58.975) Feb 23 17:30:58 crc kubenswrapper[4724]: Trace[1512153699]: [10.001467585s] [10.001467585s] END Feb 23 17:30:58 crc kubenswrapper[4724]: E0223 17:30:58.975610 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 23 17:30:58 crc kubenswrapper[4724]: W0223 17:30:58.997492 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 23 17:30:58 crc kubenswrapper[4724]: I0223 17:30:58.997665 4724 trace.go:236] Trace[1540424971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 17:30:48.995) (total time: 10002ms): Feb 23 17:30:58 crc kubenswrapper[4724]: Trace[1540424971]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:30:58.997) Feb 23 17:30:58 crc kubenswrapper[4724]: Trace[1540424971]: [10.002368307s] [10.002368307s] END Feb 23 17:30:58 crc kubenswrapper[4724]: E0223 17:30:58.997713 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.061724 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:56:05.566835208 +0000 UTC Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.800281 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.975652 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.975791 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.976941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.976969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:30:59 crc kubenswrapper[4724]: I0223 17:30:59.976978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.062987 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-16 05:52:55.540299197 +0000 UTC Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.499832 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.504134 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.510798 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f06ac60078d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,LastTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 17:31:00 crc kubenswrapper[4724]: W0223 17:31:00.511799 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.511925 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:00 crc kubenswrapper[4724]: W0223 17:31:00.515671 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.515761 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:00 crc kubenswrapper[4724]: E0223 17:31:00.517218 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.517438 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.517522 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.524875 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.524982 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.531061 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56180->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.531140 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56180->192.168.126.11:17697: read: connection reset by peer" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.763430 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]log ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]etcd ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/generic-apiserver-start-informers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/priority-and-fairness-filter ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-apiextensions-informers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-apiextensions-controllers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/crd-informer-synced ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-system-namespaces-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 23 17:31:00 crc kubenswrapper[4724]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/bootstrap-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/start-kube-aggregator-informers ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-registration-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-discovery-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]autoregister-completion ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-openapi-controller ok Feb 23 17:31:00 crc kubenswrapper[4724]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 23 
17:31:00 crc kubenswrapper[4724]: livez check failed Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.763549 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:31:00 crc kubenswrapper[4724]: I0223 17:31:00.802281 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:00Z is after 2026-02-23T05:33:13Z Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.064044 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:38:48.490024049 +0000 UTC Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.127832 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.130530 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d09bb8047e8dbfa75906b4c61f51eda4b3045409f0aba10a882f2d5700d6acd" exitCode=255 Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.130594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5d09bb8047e8dbfa75906b4c61f51eda4b3045409f0aba10a882f2d5700d6acd"} Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.130834 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.132357 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.132442 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.132454 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.133176 4724 scope.go:117] "RemoveContainer" containerID="5d09bb8047e8dbfa75906b4c61f51eda4b3045409f0aba10a882f2d5700d6acd" Feb 23 17:31:01 crc kubenswrapper[4724]: I0223 17:31:01.800183 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:01Z is after 2026-02-23T05:33:13Z Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.064420 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:01:22.04079443 +0000 UTC Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.136769 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.139166 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d"} Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.139608 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.141118 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.141174 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.141197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:02 crc kubenswrapper[4724]: W0223 17:31:02.676239 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:02Z is after 2026-02-23T05:33:13Z Feb 23 17:31:02 crc kubenswrapper[4724]: E0223 17:31:02.676331 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:02 crc kubenswrapper[4724]: I0223 17:31:02.803790 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:02Z is after 2026-02-23T05:33:13Z Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.064853 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:35:44.980260106 +0000 UTC Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.142768 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.143400 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.145821 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" exitCode=255 Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.145873 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d"} Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.145916 4724 scope.go:117] "RemoveContainer" containerID="5d09bb8047e8dbfa75906b4c61f51eda4b3045409f0aba10a882f2d5700d6acd" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.146052 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.147077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.147112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.147124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.147642 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:03 crc kubenswrapper[4724]: E0223 17:31:03.147830 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:03 crc kubenswrapper[4724]: I0223 17:31:03.803697 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:03Z is after 2026-02-23T05:33:13Z Feb 23 17:31:04 crc kubenswrapper[4724]: I0223 17:31:04.065901 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:48:21.11279129 +0000 UTC Feb 23 17:31:04 crc kubenswrapper[4724]: I0223 17:31:04.152199 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 17:31:04 crc kubenswrapper[4724]: I0223 17:31:04.800524 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:04Z is after 2026-02-23T05:33:13Z Feb 23 17:31:05 crc kubenswrapper[4724]: W0223 17:31:05.043870 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:05Z is after 2026-02-23T05:33:13Z Feb 23 17:31:05 crc kubenswrapper[4724]: E0223 17:31:05.044018 4724 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.066466 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:59:16.543742015 +0000 UTC Feb 23 17:31:05 crc kubenswrapper[4724]: E0223 17:31:05.080063 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.758905 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.759092 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.760919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.760968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.760981 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.761613 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:05 crc kubenswrapper[4724]: E0223 17:31:05.761793 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.763970 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:31:05 crc kubenswrapper[4724]: I0223 17:31:05.800885 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:05Z is after 2026-02-23T05:33:13Z Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.067249 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:09:33.457609789 +0000 UTC Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.161368 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.162693 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:06 crc kubenswrapper[4724]: 
I0223 17:31:06.162735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.162747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.163487 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:06 crc kubenswrapper[4724]: E0223 17:31:06.163686 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.541507 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.803607 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:06Z is after 2026-02-23T05:33:13Z Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.905008 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.907128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.907192 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.907206 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:06 crc kubenswrapper[4724]: I0223 17:31:06.907237 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:31:06 crc kubenswrapper[4724]: E0223 17:31:06.910498 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:06Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 17:31:06 crc kubenswrapper[4724]: E0223 17:31:06.922501 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:06Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.067506 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:40:35.504813721 +0000 UTC Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.163472 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:07 crc 
kubenswrapper[4724]: I0223 17:31:07.164595 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.164667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.164687 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.165825 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:07 crc kubenswrapper[4724]: E0223 17:31:07.166144 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:07 crc kubenswrapper[4724]: I0223 17:31:07.804491 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:07Z is after 2026-02-23T05:33:13Z Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.068187 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:59:30.874350748 +0000 UTC Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.080477 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.080737 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.613385 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.613693 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.615382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.615445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.615456 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 
17:31:08.630941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.686690 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 17:31:08 crc kubenswrapper[4724]: E0223 17:31:08.690270 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:08 crc kubenswrapper[4724]: I0223 17:31:08.802752 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:08Z is after 2026-02-23T05:33:13Z Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.069071 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:24:11.943265355 +0000 UTC Feb 23 17:31:09 crc kubenswrapper[4724]: W0223 17:31:09.144007 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:09Z is after 2026-02-23T05:33:13Z Feb 23 17:31:09 crc kubenswrapper[4724]: E0223 17:31:09.144116 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.169371 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.170715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.170771 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.170786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:09 crc kubenswrapper[4724]: I0223 17:31:09.802806 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:09Z is after 2026-02-23T05:33:13Z Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.070195 
4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:38:45.288077513 +0000 UTC Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.440000 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.440305 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.441858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.441924 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.441937 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.442544 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:10 crc kubenswrapper[4724]: E0223 17:31:10.442737 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:10 crc kubenswrapper[4724]: E0223 17:31:10.517054 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:10Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f06ac60078d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,LastTimestamp:2026-02-23 17:30:44.795988177 +0000 UTC m=+0.612187777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 17:31:10 crc kubenswrapper[4724]: I0223 17:31:10.803306 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:10Z is after 2026-02-23T05:33:13Z Feb 23 17:31:11 crc kubenswrapper[4724]: I0223 17:31:11.070837 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:09:36.075603159 +0000 UTC Feb 23 17:31:11 crc kubenswrapper[4724]: I0223 17:31:11.802843 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:11Z is after 2026-02-23T05:33:13Z Feb 23 17:31:12 crc kubenswrapper[4724]: I0223 17:31:12.071431 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:19:32.486394075 +0000 UTC Feb 23 17:31:12 crc kubenswrapper[4724]: W0223 17:31:12.697839 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:12Z is after 2026-02-23T05:33:13Z Feb 23 17:31:12 crc kubenswrapper[4724]: E0223 17:31:12.697953 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:12 crc kubenswrapper[4724]: I0223 17:31:12.803624 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:12Z is after 2026-02-23T05:33:13Z Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.071932 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:10:04.913948478 +0000 UTC Feb 23 17:31:13 crc kubenswrapper[4724]: W0223 17:31:13.279199 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:13Z is after 2026-02-23T05:33:13Z Feb 23 17:31:13 crc kubenswrapper[4724]: E0223 17:31:13.279310 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.803909 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:13Z is after 2026-02-23T05:33:13Z Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.910832 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.913033 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.913104 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.913125 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:13 crc kubenswrapper[4724]: I0223 17:31:13.913174 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:31:13 crc kubenswrapper[4724]: E0223 17:31:13.918713 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:13Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 17:31:13 crc kubenswrapper[4724]: E0223 17:31:13.928234 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:13Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 17:31:14 crc kubenswrapper[4724]: I0223 17:31:14.072854 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:48:25.412479978 +0000 UTC Feb 23 17:31:14 crc kubenswrapper[4724]: I0223 17:31:14.803187 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:14Z is after 2026-02-23T05:33:13Z Feb 23 17:31:15 crc kubenswrapper[4724]: I0223 17:31:15.074000 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:17:53.835217051 +0000 UTC Feb 23 17:31:15 crc kubenswrapper[4724]: E0223 17:31:15.080986 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 17:31:15 crc kubenswrapper[4724]: I0223 17:31:15.803596 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:15Z is after 2026-02-23T05:33:13Z Feb 23 17:31:16 crc kubenswrapper[4724]: I0223 17:31:16.074444 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:12:08.642547692 +0000 UTC Feb 23 17:31:16 crc kubenswrapper[4724]: W0223 17:31:16.359751 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z Feb 23 17:31:16 crc kubenswrapper[4724]: E0223 17:31:16.359873 4724 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 17:31:16 crc kubenswrapper[4724]: I0223 17:31:16.803121 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.075408 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:23:42.9840449 +0000 UTC Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.202304 4724 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.839897 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:34346->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.840029 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:34346->192.168.126.11:10357: read: connection reset by peer" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.840119 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.840382 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.842591 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.842657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.842678 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.843746 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 17:31:17 crc kubenswrapper[4724]: I0223 17:31:17.844176 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383" gracePeriod=30 Feb 23 17:31:18 crc kubenswrapper[4724]: I0223 17:31:18.076609 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:23:20.222225408 +0000 UTC Feb 23 17:31:18 crc kubenswrapper[4724]: I0223 17:31:18.199251 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 17:31:18 crc kubenswrapper[4724]: I0223 17:31:18.200166 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383" exitCode=255 Feb 23 17:31:18 crc kubenswrapper[4724]: I0223 17:31:18.200240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383"} Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.077483 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:12:47.891682543 +0000 UTC Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.210537 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.211430 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f"} Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.211547 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.212752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.212789 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:19 crc kubenswrapper[4724]: I0223 17:31:19.212799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.078164 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:09:58.263037411 +0000 UTC Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.215353 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.216967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.217037 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 
17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.217059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.919058 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.920913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.920971 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.920987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.921116 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.933213 4724 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.933717 4724 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 23 17:31:20 crc kubenswrapper[4724]: E0223 17:31:20.933759 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.937997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.938043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.938061 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.938089 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.938108 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:20Z","lastTransitionTime":"2026-02-23T17:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:20 crc kubenswrapper[4724]: E0223 17:31:20.957105 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.968849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.968875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.968883 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.968897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.968906 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:20Z","lastTransitionTime":"2026-02-23T17:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:20 crc kubenswrapper[4724]: E0223 17:31:20.984365 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.995763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.995805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.995819 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.995839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:20 crc kubenswrapper[4724]: I0223 17:31:20.995853 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:20Z","lastTransitionTime":"2026-02-23T17:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.006647 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.015312 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.015356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.015367 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.015411 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.015432 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:21Z","lastTransitionTime":"2026-02-23T17:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.034947 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.035196 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.035246 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: I0223 17:31:21.079259 4724 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:06:56.869622158 +0000 UTC Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.136329 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.237450 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.337896 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.438600 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.539307 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.654420 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.754950 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.856166 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:21 crc kubenswrapper[4724]: E0223 17:31:21.957166 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.058238 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: I0223 17:31:22.079746 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:23:01.611662657 +0000 UTC Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.158819 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.259468 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.360550 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.461558 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.563187 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: I0223 17:31:22.579528 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:22 crc kubenswrapper[4724]: I0223 17:31:22.579737 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:22 crc kubenswrapper[4724]: I0223 17:31:22.581248 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:22 crc kubenswrapper[4724]: 
I0223 17:31:22.581319 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:22 crc kubenswrapper[4724]: I0223 17:31:22.581329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.663948 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.765091 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.866253 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:22 crc kubenswrapper[4724]: E0223 17:31:22.966471 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.067245 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.080844 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:17:36.430609855 +0000 UTC Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.167885 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.269074 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.369566 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.470340 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.571169 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.672354 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.773123 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.873219 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.950592 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.952778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.952863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.952895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:23 crc kubenswrapper[4724]: I0223 17:31:23.954057 4724 scope.go:117] "RemoveContainer" 
containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d" Feb 23 17:31:23 crc kubenswrapper[4724]: E0223 17:31:23.973711 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.074557 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: I0223 17:31:24.082031 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:39:07.472468966 +0000 UTC Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.175472 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.275714 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.376705 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.477074 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.577805 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.678360 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.779648 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.880250 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:24 crc kubenswrapper[4724]: E0223 17:31:24.981175 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.079833 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.080144 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.081179 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.081380 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.081980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.082032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.082051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.082205 4724 certificate_manager.go:356] 
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.182640 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.234045 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.236273 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"}
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.236518 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.237788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.237843 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:25 crc kubenswrapper[4724]: I0223 17:31:25.237863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.283324 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.384384 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.484972 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.586075 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.687251 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.787727 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.888751 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:25 crc kubenswrapper[4724]: E0223 17:31:25.989683 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.065284 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.103570 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:46:15.566621953 +0000 UTC
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.103643 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.112086 4724 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.204172 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.240745 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.241315 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.242639 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae" exitCode=255
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.242686 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"}
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.242736 4724 scope.go:117] "RemoveContainer" containerID="596f00ec599e4f75092c41d4761a189c388727d069261f949b4df50fb9dae09d"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.242867 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.243757 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.243791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.243801 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.244616 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.251757 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.305368 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.406610 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.508056 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: I0223 17:31:26.541248 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.609241 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.709903 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.810411 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:26 crc kubenswrapper[4724]: E0223 17:31:26.911329 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.012530 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.104649 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:06:58.875650615 +0000 UTC
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.113737 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.214808 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.248251 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.253344 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.254822 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.254870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.254887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:27 crc kubenswrapper[4724]: I0223 17:31:27.255969 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.256351 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.315282 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.416484 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.517582 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.618405 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.719302 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.819860 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:27 crc kubenswrapper[4724]: E0223 17:31:27.920969 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.021971 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: I0223 17:31:28.080869 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 17:31:28 crc kubenswrapper[4724]: I0223 17:31:28.082037 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 17:31:28 crc kubenswrapper[4724]: I0223 17:31:28.105494 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:55:14.771927111 +0000 UTC
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.122642 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.223148 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.323597 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.424607 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.525719 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: I0223 17:31:28.558446 4724 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.626118 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.726590 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.827746 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:28 crc kubenswrapper[4724]: E0223 17:31:28.928298 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.028757 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: I0223 17:31:29.106741 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:33:00.616075585 +0000 UTC
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.128930 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: I0223 17:31:29.169935 4724 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.230428 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.331230 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.431775 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.532647 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.632971 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.733571 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.834716 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:29 crc kubenswrapper[4724]: E0223 17:31:29.935774 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:30 crc kubenswrapper[4724]: E0223 17:31:30.036562 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.107159 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:02:51.86383551 +0000 UTC
Feb 23 17:31:30 crc kubenswrapper[4724]: E0223 17:31:30.137471 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:30 crc kubenswrapper[4724]: E0223 17:31:30.237584 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.253585 4724 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.340916 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.340977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.340997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.341025 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.341043 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.441130 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.443513 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.443559 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.443579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.443604 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.443623 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.460138 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"
Feb 23 17:31:30 crc kubenswrapper[4724]: E0223 17:31:30.460615 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.546760 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.546831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.546859 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.546894 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.546935 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.650049 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.650135 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.650163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.650197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.650221 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.753071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.753128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.753146 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.753177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.753196 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.856666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.856746 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.856777 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.856810 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.856834 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.960257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.960317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.960336 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.960366 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:30 crc kubenswrapper[4724]: I0223 17:31:30.960386 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:30Z","lastTransitionTime":"2026-02-23T17:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.064176 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.064254 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.064278 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.064304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.064317 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.106631 4724 apiserver.go:52] "Watching apiserver"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.107267 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:18:43.301375463 +0000 UTC
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.110861 4724 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.111269 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.112089 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.112178 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.112276 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.112387 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.112512 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.112763 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.118506 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae"
Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.119061 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.119215 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.119330 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.119382 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.120285 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.124788 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.125176 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.128791 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.129076 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.129185 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.131926 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.132160 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.132326 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.167304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.167362 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.167423 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.167463 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.167489 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.176225 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.197090 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.201087 4724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.214056 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.232524 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.252710 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.272586 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.272652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.272671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.272703 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.272729 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.273023 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.290677 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295133 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295211 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295349 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295504 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 23 17:31:31 crc 
kubenswrapper[4724]: I0223 17:31:31.295618 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295671 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295719 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295895 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.295914 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296086 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296237 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296286 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296325 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296368 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296483 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296482 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296502 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296633 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296757 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296798 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.296954 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297000 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297092 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297139 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297187 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297232 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297282 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297378 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297381 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297468 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297567 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297613 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297615 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297720 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297759 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297793 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297828 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297858 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297932 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297961 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.297993 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298022 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298048 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298102 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298129 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298121 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod 
"c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298153 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298181 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298228 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298252 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298275 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298300 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298348 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298382 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298425 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298386 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298464 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298455 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298565 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298684 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298737 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298794 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298905 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299071 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299162 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299281 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299332 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300072 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300493 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300566 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300618 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300675 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300729 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300781 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300842 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300902 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300975 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301026 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301083 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301189 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301242 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301295 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301351 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301497 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301607 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301670 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301726 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301781 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301834 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301881 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301989 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302144 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302198 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302250 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302303 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302352 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302435 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302488 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302545 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302600 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302655 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302715 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302767 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.302819 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303079 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303162 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303220 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303275 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303381 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303473 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303526 
4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303581 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303639 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303696 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303753 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303809 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303925 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.303985 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304254 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304336 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304433 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304506 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304568 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304636 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304695 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304753 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304815 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304873 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304930 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304991 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305053 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305115 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305172 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305228 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305290 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305346 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305433 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305492 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305549 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305605 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305779 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305840 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305953 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306002 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306054 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306175 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306229 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306347 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306473 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306532 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306583 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306639 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306691 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306740 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306798 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306854 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306917 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306978 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307035 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307123 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307187 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307243 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307286 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307325 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307362 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307433 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307472 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307506 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307541 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307616 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307651 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307692 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307726 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307762 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307822 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307886 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307936 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308041 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298743 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308149 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298776 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298920 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298859 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.298977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299435 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299530 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299716 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.299847 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300058 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300256 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300411 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300846 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300824 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300873 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300880 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.300968 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308499 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.301287 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.304952 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305326 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.305854 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306851 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306939 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.306964 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307032 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307051 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307762 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307935 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.307978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308014 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308114 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308693 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308873 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.309193 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.309561 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.309799 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310194 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310193 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310487 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310642 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310504 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310652 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.310836 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.311037 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.311495 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:31:31.81145575 +0000 UTC m=+47.627655390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.312122 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.312166 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.312544 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.312904 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.312986 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313273 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313363 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313529 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313617 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313617 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.313659 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.314660 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.315658 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.315682 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.315780 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.315934 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316094 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316100 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316166 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316412 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316470 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316535 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316828 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.308192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.316967 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317053 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317152 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317193 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317237 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317249 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317289 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317330 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317374 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317589 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317614 4724 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317637 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317660 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317682 4724 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317708 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317730 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317750 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317773 4724 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317794 4724 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317790 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317816 4724 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317847 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317870 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317893 4724 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317914 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317935 4724 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:31:31 crc
kubenswrapper[4724]: I0223 17:31:31.317955 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317976 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.317995 4724 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318015 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318035 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318057 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318076 4724 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318095 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318118 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318139 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318157 4724 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318176 4724 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318195 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318214 4724 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318234 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318254 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318273 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318293 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318313 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318335 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318355 4724 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318373 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318432 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318453 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318461 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318473 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.318594 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.318684 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:31.818655577 +0000 UTC m=+47.634855207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318684 4724 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318719 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318740 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318762 4724 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318786 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318810 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318832 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318852 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath 
\"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318873 4724 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318894 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318914 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318941 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318962 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.318982 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319001 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319020 4724 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319039 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319059 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319081 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319137 4724 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319165 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 
17:31:31.319193 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319226 4724 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319254 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319278 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319269 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319300 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319322 4724 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319344 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319366 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319386 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319439 4724 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319459 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319480 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319499 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319519 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319539 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319566 4724 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319592 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.319682 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.320015 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.320341 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.320370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.320867 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.320891 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.320894 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.321684 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.321786 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.321899 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.321994 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322173 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322765 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322815 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322850 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322916 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.322954 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.323505 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.323611 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.323805 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.324470 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325136 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325138 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325151 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325208 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325705 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.325862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.326057 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.326320 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.326486 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.327171 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.327264 4724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.327764 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.328328 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.328368 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.328521 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.329187 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.329292 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.329483 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.329750 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.330717 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.331184 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.331662 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.332135 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.332566 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.332689 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.333176 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.333499 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.333556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.333539 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.333710 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.334110 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.334634 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.334633 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.334796 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.335275 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.335659 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.336031 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.337335 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.338480 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
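The err="failed to patch status ..." payloads in this section are strategic merge patches against the pod's status subresource: list entries in conditions and containerStatuses merge with the existing object by their patch keys (type and name respectively), and the $setElementOrder/conditions directive pins the order of the merged list. The shape, rebuilt with abbreviated values from the patch above:

// statuspatch.go: the shape of the strategic merge patch the status manager
// could not deliver (the webhook refused the request, not the patch itself).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "9d751cbb-f2e2-430d-9754-c882a5e924a5"},
		"status": map[string]any{
			// Pins the final ordering of the merged conditions list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Each entry merges into the existing condition with the same "type".
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b)) // sent as a PATCH to the pod's /status subresource
}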
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.338635 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.353558 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.353989 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.354003 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.353853 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.354549 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.354591 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.354782 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.354723 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.355585 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.355912 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.355974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.356560 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.356561 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.356733 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.356925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.357210 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.357230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.358214 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.358501 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:31.858450756 +0000 UTC m=+47.674650396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.358925 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.358976 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.359567 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.359623 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.359634 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.359810 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.359935 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.360004 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.360078 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.360121 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.362165 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.362214 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.362241 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.362358 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:31.862326512 +0000 UTC m=+47.678526132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.362954 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.363020 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.363045 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.363185 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:31.863118971 +0000 UTC m=+47.679318601 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.363929 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.364983 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.365365 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.365383 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.370586 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.370810 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.370586 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.370951 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.371611 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.371637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.371176 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.372111 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.372348 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.372533 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.372640 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.373327 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.376363 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.376616 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.376751 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.376793 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.377496 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.378945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.378985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.378999 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.379021 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.379036 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.383810 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.384270 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.384826 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.385038 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.386533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.386927 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.387152 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.387780 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.389760 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.391026 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.400139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.400190 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.400205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.400251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.400266 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.406583 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.409226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.413115 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.418127 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.418207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.418217 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.418237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.418563 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.427062 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.427781 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.429715 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.434589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.434664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.434682 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.434715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.434735 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.445120 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.450320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.450380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.450434 4724
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.450464 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.450483 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.459882 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.459942 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460062 4724 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460152 4724 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460152 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460600 4724 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460622 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460643 4724 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460662 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460860 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460884 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460902 4724 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460920 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460938 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460968 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.460991 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461013 4724 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461036 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461061 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461084 4724 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461105 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461129 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461155 4724 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461180 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461204 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461228 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461257 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461284 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461313 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461338 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461360 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461383 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461448 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461476 4724 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461499 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461525 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461550 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461574 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461592 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461610 4724 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461627 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461644 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461662 4724 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461683 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461700 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.461378 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.461719 4724 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462010 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462034 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462086 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462106 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462125 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462173 4724 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462195 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462213 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462260 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462279 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462297 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462341 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462359 4724 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462377 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462443 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462468 4724 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462487 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462541 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462559 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462577 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462624 4724 
reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462643 4724 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462662 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462708 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462725 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462743 4724 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462791 4724 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462817 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.462905 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.463208 4724 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.463290 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.463323 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.463387 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465641 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465694 4724 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465716 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465730 4724 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465747 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465763 4724 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465780 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465796 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465818 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465833 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465847 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465864 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465878 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465896 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465909 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465925 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465938 4724 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465950 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465965 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465979 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.465992 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466006 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466019 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466033 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466048 4724 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466063 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466075 4724 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466087 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466101 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466119 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466136 4724 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466150 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466173 4724 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466186 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466199 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466217 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466230 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.466974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.467000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.467079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.467150 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.481715 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 17:31:31 crc kubenswrapper[4724]: W0223 17:31:31.484619 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b49214811d5993cf0b6c38b08da64f882243977fceb36f4717324d4a643c9263 WatchSource:0}: Error finding container b49214811d5993cf0b6c38b08da64f882243977fceb36f4717324d4a643c9263: Status 404 returned error can't find the container with id b49214811d5993cf0b6c38b08da64f882243977fceb36f4717324d4a643c9263 Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.484803 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.485252 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.487527 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.487644 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.487713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.487799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.487862 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.491736 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:31 crc kubenswrapper[4724]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 17:31:31 crc kubenswrapper[4724]: if [[ -f "/env/_master" ]]; then Feb 23 17:31:31 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: source "/env/_master" Feb 23 17:31:31 crc kubenswrapper[4724]: set +o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: fi Feb 23 17:31:31 crc kubenswrapper[4724]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 23 17:31:31 crc kubenswrapper[4724]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 17:31:31 crc kubenswrapper[4724]: ho_enable="--enable-hybrid-overlay" Feb 23 17:31:31 crc kubenswrapper[4724]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 17:31:31 crc kubenswrapper[4724]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 17:31:31 crc kubenswrapper[4724]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 17:31:31 crc kubenswrapper[4724]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-host=127.0.0.1 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-port=9743 \ Feb 23 17:31:31 crc kubenswrapper[4724]: ${ho_enable} \ Feb 23 17:31:31 crc kubenswrapper[4724]: --enable-interconnect \ Feb 23 17:31:31 crc kubenswrapper[4724]: --disable-approver \ Feb 23 17:31:31 crc kubenswrapper[4724]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --wait-for-kubernetes-api=200s \ Feb 23 17:31:31 crc kubenswrapper[4724]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --loglevel="${LOGLEVEL}" Feb 23 17:31:31 crc kubenswrapper[4724]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 17:31:31 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.495510 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:31 crc kubenswrapper[4724]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 17:31:31 crc kubenswrapper[4724]: if [[ -f "/env/_master" ]]; then Feb 23 17:31:31 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: source "/env/_master" Feb 23 17:31:31 crc kubenswrapper[4724]: set +o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: fi Feb 23 17:31:31 crc kubenswrapper[4724]: Feb 23 17:31:31 crc kubenswrapper[4724]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 17:31:31 crc kubenswrapper[4724]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --disable-webhook \ Feb 23 17:31:31 crc kubenswrapper[4724]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --loglevel="${LOGLEVEL}" Feb 23 17:31:31 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 17:31:31 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.497170 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.504021 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.505621 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.525216 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8ac37475d87ccbdfc99930d40dfc51d3b23e6fc9abaf21757765d43840d6a348"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.527076 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b49214811d5993cf0b6c38b08da64f882243977fceb36f4717324d4a643c9263"} Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.527103 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.528453 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.529538 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:31 crc kubenswrapper[4724]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 17:31:31 crc kubenswrapper[4724]: if [[ -f "/env/_master" ]]; then Feb 23 17:31:31 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: source "/env/_master" Feb 23 17:31:31 crc kubenswrapper[4724]: set +o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: fi Feb 23 17:31:31 crc kubenswrapper[4724]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 23 17:31:31 crc kubenswrapper[4724]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 17:31:31 crc kubenswrapper[4724]: ho_enable="--enable-hybrid-overlay" Feb 23 17:31:31 crc kubenswrapper[4724]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 17:31:31 crc kubenswrapper[4724]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 17:31:31 crc kubenswrapper[4724]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 17:31:31 crc kubenswrapper[4724]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-host=127.0.0.1 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --webhook-port=9743 \ Feb 23 17:31:31 crc kubenswrapper[4724]: ${ho_enable} \ Feb 23 17:31:31 crc kubenswrapper[4724]: --enable-interconnect \ Feb 23 17:31:31 crc kubenswrapper[4724]: --disable-approver \ Feb 23 17:31:31 crc kubenswrapper[4724]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --wait-for-kubernetes-api=200s \ Feb 23 17:31:31 crc kubenswrapper[4724]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --loglevel="${LOGLEVEL}" Feb 23 17:31:31 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Feb 23 17:31:31 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.533823 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:31 crc kubenswrapper[4724]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 17:31:31 crc kubenswrapper[4724]: if [[ -f "/env/_master" ]]; then Feb 23 17:31:31 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: source "/env/_master" Feb 23 17:31:31 crc kubenswrapper[4724]: set +o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: fi Feb 23 17:31:31 crc kubenswrapper[4724]: Feb 23 17:31:31 crc kubenswrapper[4724]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 17:31:31 crc kubenswrapper[4724]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 17:31:31 crc kubenswrapper[4724]: --disable-webhook \ Feb 23 17:31:31 crc kubenswrapper[4724]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 17:31:31 crc kubenswrapper[4724]: --loglevel="${LOGLEVEL}" Feb 23 17:31:31 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 17:31:31 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.535674 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.539377 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.554040 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.568213 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592270 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592370 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592321 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.592731 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.609567 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.625790 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.640366 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.651709 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.670128 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.682469 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.691164 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.695513 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.695684 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.695793 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.695880 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.695959 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.704528 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.716900 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.732521 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.748678 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.761929 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:31 crc kubenswrapper[4724]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 17:31:31 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:31 crc kubenswrapper[4724]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 17:31:31 crc kubenswrapper[4724]: source /etc/kubernetes/apiserver-url.env Feb 23 17:31:31 crc kubenswrapper[4724]: else Feb 23 17:31:31 crc kubenswrapper[4724]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 17:31:31 crc kubenswrapper[4724]: exit 1 Feb 23 17:31:31 crc kubenswrapper[4724]: fi Feb 23 17:31:31 crc kubenswrapper[4724]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 17:31:31 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_
IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 17:31:31 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.763524 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.799348 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.799460 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.799481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.799512 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.799534 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.871268 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.871484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871535 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:31:32.87149696 +0000 UTC m=+48.687696600 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.871602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.871662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871676 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871712 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.871716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871732 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871796 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:32.871776767 +0000 UTC m=+48.687976397 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871843 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871852 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871894 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:32.87187981 +0000 UTC m=+48.688079450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871931 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:32.87190624 +0000 UTC m=+48.688105870 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.871938 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.872073 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.872163 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: E0223 17:31:31.872320 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-23 17:31:32.87228573 +0000 UTC m=+48.688485530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.908058 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.908117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.908143 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.908178 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:31 crc kubenswrapper[4724]: I0223 17:31:31.908202 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:31Z","lastTransitionTime":"2026-02-23T17:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.011870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.012000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.012074 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.012112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.012138 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.107818 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:49:40.917693703 +0000 UTC Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.115693 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.115763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.115784 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.115817 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.115836 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.219231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.219315 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.219342 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.219376 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.219436 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.234568 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.251620 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.256802 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.271319 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.290322 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.305842 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.322061 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.322967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.323036 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.323051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.323073 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.323087 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.338635 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.354849 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.425726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.426043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.426155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.426224 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.426289 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.530778 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"55d52df24d1ad50fd83110c658bb5424d2f90e0b74d036eab0273765130c6092"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.532301 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.532349 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.532375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.532441 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.532468 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.533673 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:31:32 crc kubenswrapper[4724]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 17:31:32 crc kubenswrapper[4724]: set -o allexport Feb 23 17:31:32 crc kubenswrapper[4724]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 17:31:32 crc kubenswrapper[4724]: source /etc/kubernetes/apiserver-url.env Feb 23 17:31:32 crc kubenswrapper[4724]: else Feb 23 17:31:32 crc kubenswrapper[4724]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 17:31:32 crc kubenswrapper[4724]: exit 1 Feb 23 17:31:32 crc kubenswrapper[4724]: fi Feb 23 17:31:32 crc kubenswrapper[4724]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 17:31:32 crc kubenswrapper[4724]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 17:31:32 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.535281 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.548311 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.561471 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.578821 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.596004 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.608693 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.623904 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.635504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.635583 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.635608 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.635640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.635663 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.639021 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.657963 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.739254 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.739543 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.739621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.739657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.739672 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.843609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.843687 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.843700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.843720 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.843737 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.881979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.882283 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:31:34.882218449 +0000 UTC m=+50.698418079 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.882436 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.882531 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.882598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.882649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.882871 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.882905 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 
17:31:32.882928 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.882994 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:34.882978848 +0000 UTC m=+50.699178478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883085 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883136 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883165 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883201 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883284 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:34.883247314 +0000 UTC m=+50.699447104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883340 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:34.883304746 +0000 UTC m=+50.699504376 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.883779 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.884009 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:34.883988982 +0000 UTC m=+50.700188612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.947031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.947102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.947115 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.947141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.947163 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:32Z","lastTransitionTime":"2026-02-23T17:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.950754 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.950814 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.950954 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.951118 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.951309 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:32 crc kubenswrapper[4724]: E0223 17:31:32.951517 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.955889 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.956757 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.958473 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.959253 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.960278 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.960836 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.961439 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.962420 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.963143 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.964103 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.964728 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.965970 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.966536 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.967061 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.967942 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.968637 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.969640 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.970074 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.970778 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.972322 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.972899 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.973957 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.974446 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.975739 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.976217 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.976882 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.977965 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.978572 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.979745 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.980250 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.981189 4724 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.981293 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.982902 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.983758 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.984221 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.986737 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.988365 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.990799 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.992624 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.994937 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.996035 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.998284 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 23 17:31:32 crc kubenswrapper[4724]: I0223 17:31:32.999718 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.001932 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.002926 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.004906 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.006038 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.008529 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.009574 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.010993 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.011512 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.012618 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.013193 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.013684 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.050675 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.050978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.051066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.051202 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.051309 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.108817 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:20:35.418724765 +0000 UTC Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.154869 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.154924 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.154943 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.154969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.154991 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.258533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.258617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.258639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.258673 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.258694 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.363157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.363226 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.363251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.363281 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.363304 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.384091 4724 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.466873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.466918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.466931 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.466950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.466965 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.570656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.570712 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.570727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.570751 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.570771 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.674274 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.674338 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.674356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.674383 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.674429 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.777901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.777966 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.777983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.778014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.778033 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.880943 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.881034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.881054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.881085 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.881106 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.984140 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.984228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.984246 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.984273 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:33 crc kubenswrapper[4724]: I0223 17:31:33.984294 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:33Z","lastTransitionTime":"2026-02-23T17:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.087870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.087934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.087958 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.087992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.088012 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.109673 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:16:56.009857579 +0000 UTC Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.191229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.191317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.191337 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.191369 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.191428 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.294846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.294931 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.294963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.294998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.295022 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.399912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.399994 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.400028 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.400063 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.400099 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.504458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.504543 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.504566 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.504599 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.504623 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.608088 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.608163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.608181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.608212 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.608229 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.711834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.711900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.711920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.711948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.711970 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.815362 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.815479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.815496 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.815526 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.815544 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.903983 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.904135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.904190 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904307 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:31:38.904248143 +0000 UTC m=+54.720447773 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904360 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904447 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904470 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.904507 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904556 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:38.90453093 +0000 UTC m=+54.720730560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.904595 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904646 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904715 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:38.904699534 +0000 UTC m=+54.720899164 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904734 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904758 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904773 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.904833 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:38.904818577 +0000 UTC m=+54.721018217 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.905045 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.905379 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:38.90534374 +0000 UTC m=+54.721543370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.918605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.918692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.918710 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.918737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.918765 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:34Z","lastTransitionTime":"2026-02-23T17:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.950299 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.950294 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.950529 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.950717 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.951514 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:34 crc kubenswrapper[4724]: E0223 17:31:34.951778 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.965019 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.977809 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:34 crc kubenswrapper[4724]: I0223 17:31:34.996931 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.014212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.022153 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.022238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.022267 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.022304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.022324 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.034850 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.052603 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.071012 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.095564 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.110499 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:12:35.691890159 +0000 UTC Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.125666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.125725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.125739 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.125765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.125792 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.229078 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.229138 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.229158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.229223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.229246 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.332476 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.332576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.332596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.332624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.332643 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.435964 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.436033 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.436051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.436078 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.436099 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.539780 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.539847 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.539865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.539892 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.539911 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.643579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.643691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.643721 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.643765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.643793 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.752667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.752735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.752751 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.752774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.752790 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.856692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.856774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.856789 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.856815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.856826 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.961245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.961430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.961459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.961489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:35 crc kubenswrapper[4724]: I0223 17:31:35.961507 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:35Z","lastTransitionTime":"2026-02-23T17:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.064352 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.064424 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.064446 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.064473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.064490 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.110782 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:36:46.861500907 +0000 UTC Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.168165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.168234 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.168249 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.168274 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.168290 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.271595 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.271656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.271674 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.271702 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.271719 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.375456 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.375506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.375515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.375532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.375543 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.478873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.478967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.478989 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.479022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.479045 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.582498 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.582545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.582555 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.582575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.582590 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.685378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.685506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.685529 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.685560 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.685579 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.720296 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.729123 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.735863 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.737156 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.749190 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.769055 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.785916 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.788166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.788216 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.788234 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.788260 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.788279 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.807743 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.821782 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.838555 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.852336 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.867459 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.882682 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.891173 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.891245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.891264 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.891291 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.891312 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.901386 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.920596 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.936650 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.950844 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.950946 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:36 crc kubenswrapper[4724]: E0223 17:31:36.951023 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.951054 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:36 crc kubenswrapper[4724]: E0223 17:31:36.951223 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:36 crc kubenswrapper[4724]: E0223 17:31:36.951467 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.954204 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.970491 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.983998 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.993924 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.994000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.994027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.994067 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:36 crc kubenswrapper[4724]: I0223 17:31:36.994091 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:36Z","lastTransitionTime":"2026-02-23T17:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.001362 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.097721 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.097788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.097808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.097840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.097859 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.111143 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:44:42.047663372 +0000 UTC Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.200520 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.200581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.200598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.200625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.200651 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.303881 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.303926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.303939 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.303960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.303975 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.406946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.407003 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.407016 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.407035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.407046 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.514031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.514084 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.514095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.514114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.514127 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: E0223 17:31:37.555538 4724 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.616588 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.616648 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.616662 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.616683 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.616700 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.720306 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.720373 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.720431 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.720462 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.720483 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.824019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.824109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.824134 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.824166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.824191 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.928178 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.928282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.928313 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.928354 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:37 crc kubenswrapper[4724]: I0223 17:31:37.928378 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:37Z","lastTransitionTime":"2026-02-23T17:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.031125 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.031169 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.031179 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.031195 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.031208 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.111835 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:44:05.86363166 +0000 UTC Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.135514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.135592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.135610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.135638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.135661 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.239220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.239269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.239280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.239299 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.239313 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.341106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.341151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.341166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.341188 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.341202 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.444520 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.445159 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.445234 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.445308 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.445370 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.548179 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.548524 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.548619 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.548715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.548796 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.652127 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.652514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.652667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.652775 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.652881 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.755908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.755967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.755979 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.755998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.756009 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.859162 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.859217 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.859231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.859251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.859266 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.948498 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.948578 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.948607 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.948631 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.948650 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948729 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:31:46.948690738 +0000 UTC m=+62.764890348 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948783 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948802 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948815 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948843 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948854 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948902 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948922 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948864 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948882 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:46.948863573 +0000 UTC m=+62.765063343 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.948981 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:46.948967165 +0000 UTC m=+62.765166775 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.949000 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:46.948989816 +0000 UTC m=+62.765189426 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.949016 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:31:46.949008026 +0000 UTC m=+62.765207636 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.949960 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.949975 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.949978 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.950064 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.950221 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:38 crc kubenswrapper[4724]: E0223 17:31:38.950349 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.961536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.961567 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.961581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.961601 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:38 crc kubenswrapper[4724]: I0223 17:31:38.961614 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:38Z","lastTransitionTime":"2026-02-23T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.064035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.064099 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.064111 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.064133 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.064147 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.112659 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:06:40.174253494 +0000 UTC Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.167574 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.167639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.167658 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.167688 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.167710 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.271472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.271545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.271566 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.271596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.271616 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.375184 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.375229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.375240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.375259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.375274 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.478252 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.478323 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.478346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.478378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.478450 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.582282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.582753 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.582983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.583197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.583366 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.686172 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.686607 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.686736 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.686882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.687020 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.790946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.791007 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.791026 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.791054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.791077 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.893866 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.893912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.893920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.893936 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.893946 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.997511 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.997584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.997602 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.997629 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:39 crc kubenswrapper[4724]: I0223 17:31:39.997646 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:39Z","lastTransitionTime":"2026-02-23T17:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.101557 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.101637 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.101657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.101685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.101705 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.113648 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:28:56.052130942 +0000 UTC Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.205348 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.205426 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.205440 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.205459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.205471 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.308907 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.308969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.308993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.309025 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.309045 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.411508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.411585 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.411601 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.411625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.411640 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.514638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.514688 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.514698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.514719 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.514732 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.617438 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.617487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.617497 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.617515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.617525 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.624665 4724 csr.go:261] certificate signing request csr-g2tk8 is approved, waiting to be issued Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.639871 4724 csr.go:257] certificate signing request csr-g2tk8 is issued Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.720541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.720581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.720592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.720610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.720622 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.823682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.823733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.823745 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.823763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.823778 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.926600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.926639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.926647 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.926663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.926672 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:40Z","lastTransitionTime":"2026-02-23T17:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.950299 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.950360 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:40 crc kubenswrapper[4724]: E0223 17:31:40.950508 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:40 crc kubenswrapper[4724]: E0223 17:31:40.950658 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:40 crc kubenswrapper[4724]: I0223 17:31:40.951088 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:40 crc kubenswrapper[4724]: E0223 17:31:40.951221 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.029854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.029911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.029926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.029951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.029969 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.114812 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:26:42.634747911 +0000 UTC Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.133236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.133605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.133696 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.133786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.133860 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.236844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.236889 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.236898 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.236914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.236923 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.339663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.339715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.339730 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.339747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.339757 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.442707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.442763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.442779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.442803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.442820 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.545418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.545470 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.545481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.545501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.545511 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.642231 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 17:26:40 +0000 UTC, rotation deadline is 2026-12-15 03:12:16.231359007 +0000 UTC Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.642565 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7065h40m34.588802275s for next certificate rotation Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.647821 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.648073 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.648142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.648208 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.648274 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.750974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.751035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.751048 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.751070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.751096 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.844258 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.844631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.844732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.844820 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.844897 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.855485 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.860045 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.860103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.860115 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.860135 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.860147 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.871280 4724 kubelet_node_status.go:585] "Error updating node status, will retry" [err payload and "node.network-node-identity.openshift.io" webhook connection-refused failure identical to the 17:31:41.855485 attempt above]
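These status-patch failures arrive in a burst because each sync attempt retries the PATCH a fixed number of times before giving up; the rejection itself is the API server failing to reach OVN-Kubernetes' network-node-identity admission webhook on 127.0.0.1:9743, which is not listening yet for the same CNI-not-ready reason. A sketch of that retry shape follows; the count of 5 is meant to mirror the kubelet's nodeStatusUpdateRetry constant but is asserted from memory, so treat it as an assumption:

// statusretry.go: sketch the retry loop behind the repeated errors above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to mirror the kubelet constant

// patchNodeStatus stands in for the PATCH the API server rejects while
// the node.network-node-identity.openshift.io webhook is unreachable.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": connection refused`)
}

func updateNodeStatus() error {
	var err error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err = patchNodeStatus(); err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return fmt.Errorf("update node status exceeds retry count: %w", err)
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}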
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.875057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.875130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.875142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.875179 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.875195 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.894508 4724 kubelet_node_status.go:585] "Error updating node status, will retry" [err payload and webhook connection-refused failure again identical to the 17:31:41.855485 attempt above]
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.899114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.899168 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.899194 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.899217 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.899228 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.911151 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.915403 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.915439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.915449 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.915468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.915486 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.925458 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:41 crc kubenswrapper[4724]: E0223 17:31:41.925592 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.927691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.927733 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.927745 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.927770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:41 crc kubenswrapper[4724]: I0223 17:31:41.927784 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:41Z","lastTransitionTime":"2026-02-23T17:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.032895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.033202 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.033273 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.033342 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.033453 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.115810 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:46:17.777207066 +0000 UTC Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.136454 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.136533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.136559 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.136593 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.136617 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.239098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.239139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.239149 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.239165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.239176 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.342714 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.342788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.342804 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.342831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.342850 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.446278 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.446347 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.446372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.446474 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.446498 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.549899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.549955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.549965 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.549983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.549996 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.653066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.653114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.653159 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.653179 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.653193 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.755895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.755941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.755967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.755987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.756000 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.859299 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.859341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.859353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.859375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.859402 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.950698 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.950808 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.950702 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:42 crc kubenswrapper[4724]: E0223 17:31:42.950875 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:42 crc kubenswrapper[4724]: E0223 17:31:42.950932 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:42 crc kubenswrapper[4724]: E0223 17:31:42.950993 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.962063 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.962109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.962126 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.962144 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:42 crc kubenswrapper[4724]: I0223 17:31:42.962158 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:42Z","lastTransitionTime":"2026-02-23T17:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.065235 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.065316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.065331 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.065374 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.065408 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.116322 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:37:23.637474447 +0000 UTC Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.168126 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.168179 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.168192 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.168213 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.168229 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.270965 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.270998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.271008 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.271023 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.271034 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.373059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.373473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.373580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.373661 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.373729 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.476903 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.477205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.477449 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.477640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.477814 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.580163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.580210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.580220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.580237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.580247 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.683008 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.683059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.683069 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.683085 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.683097 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.786223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.786291 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.786311 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.786342 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.786362 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.889724 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.890084 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.890160 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.890239 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.890317 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.951553 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae" Feb 23 17:31:43 crc kubenswrapper[4724]: E0223 17:31:43.951867 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.993037 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.993349 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.993436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.993512 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:43 crc kubenswrapper[4724]: I0223 17:31:43.993569 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:43Z","lastTransitionTime":"2026-02-23T17:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.096788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.097168 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.097261 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.097341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.097442 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.116540 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:51:57.946800214 +0000 UTC Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.200618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.200664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.200673 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.200692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.200706 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.303903 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.303946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.303955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.303975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.303985 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.406671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.406711 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.406723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.406742 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.406756 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.509461 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.509501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.509514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.509532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.509553 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.615822 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.615882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.615898 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.615923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.615940 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.669287 4724 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.720859 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.720918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.720946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.720971 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.720989 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.822846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.822900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.822912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.822934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.822946 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.925536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.925652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.925665 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.925685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.925699 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:44Z","lastTransitionTime":"2026-02-23T17:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.950286 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.950415 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.950439 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:44 crc kubenswrapper[4724]: E0223 17:31:44.950609 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:44 crc kubenswrapper[4724]: E0223 17:31:44.950780 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:44 crc kubenswrapper[4724]: E0223 17:31:44.950900 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.960977 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.974334 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:44 crc kubenswrapper[4724]: I0223 17:31:44.992594 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.004669 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.020257 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.029121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.029180 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.029193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.029235 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.029247 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.031191 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.044815 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.058019 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.068544 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.117074 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:46:19.290017727 +0000 UTC Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.137195 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.137638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.137648 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.137666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.137677 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.241997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.242038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.242049 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.242066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.242080 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.344896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.344952 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.344965 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.344996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.345010 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.448210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.448274 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.448287 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.448310 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.448322 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.550795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.550842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.550852 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.550871 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.550883 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.574315 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.584509 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.593564 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.602096 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.612779 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.621517 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.634802 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.648657 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.653334 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.653367 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.653380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.653417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.653429 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.657569 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.667809 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.755709 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.755750 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.755763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.755779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.755790 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.858536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.858581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.858591 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.858609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.858621 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.961130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.961170 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.961181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.961205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:45 crc kubenswrapper[4724]: I0223 17:31:45.961223 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:45Z","lastTransitionTime":"2026-02-23T17:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.063414 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.063459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.063472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.063491 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.063506 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.117232 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:29:47.039514204 +0000 UTC Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.166226 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.166283 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.166294 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.166318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.166331 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.269377 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.269458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.269473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.269493 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.269506 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.372100 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.372142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.372154 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.372171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.372182 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.474538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.474580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.474594 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.474623 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.474637 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.576994 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.577032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.577041 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.577054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.577063 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.579162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.579259 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.603607 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.618353 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.638011 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.658531 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.673051 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.678694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.678723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.678734 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.678750 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.678762 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.684881 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.700330 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.715581 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.728818 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:46Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.781276 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.781348 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.781361 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.781385 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.781419 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.885370 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.885476 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.885502 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.885541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.885565 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.950484 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.950484 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.950506 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:46 crc kubenswrapper[4724]: E0223 17:31:46.950701 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:46 crc kubenswrapper[4724]: E0223 17:31:46.950794 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:46 crc kubenswrapper[4724]: E0223 17:31:46.951045 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.988854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.988903 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.988915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.988935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:46 crc kubenswrapper[4724]: I0223 17:31:46.988953 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:46Z","lastTransitionTime":"2026-02-23T17:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.030691 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.030821 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.030870 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.030910 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.030946 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.030973 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:32:03.030938345 +0000 UTC m=+78.847137975 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031120 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031147 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031168 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031230 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:03.031212372 +0000 UTC m=+78.847411992 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031234 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031279 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:03.031265863 +0000 UTC m=+78.847465483 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031170 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031331 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-23 17:32:03.031320684 +0000 UTC m=+78.847520294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031120 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031359 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031372 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:47 crc kubenswrapper[4724]: E0223 17:31:47.031433 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:03.031420437 +0000 UTC m=+78.847620057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.091623 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.091661 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.091711 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.091731 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.091745 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.117825 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:15:57.83046483 +0000 UTC Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.194244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.194288 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.194299 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.194320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.194331 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.296975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.297050 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.297069 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.297098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.297121 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.399084 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.399147 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.399162 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.399180 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.399194 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.501631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.501681 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.501692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.501711 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.501722 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.604786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.604864 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.604892 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.604923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.604946 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.710034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.710092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.710111 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.710141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.710163 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.813616 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.813712 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.813737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.813774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.813798 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.916993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.917071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.917094 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.917124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:47 crc kubenswrapper[4724]: I0223 17:31:47.917145 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:47Z","lastTransitionTime":"2026-02-23T17:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.020488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.020564 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.020588 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.020621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.020646 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.118529 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:36:52.216355922 +0000 UTC Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.124621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.124719 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.124748 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.124787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.124814 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.229244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.229329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.229349 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.229385 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.229444 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.332978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.333047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.333063 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.333092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.333111 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.436481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.436541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.436555 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.436575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.436594 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.539696 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.539754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.539770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.539790 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.539804 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.641846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.641890 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.641904 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.641921 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.641933 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.745241 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.745337 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.745382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.745459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.745546 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.849722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.850344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.850363 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.850420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.850443 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.949909 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.949954 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.950017 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:48 crc kubenswrapper[4724]: E0223 17:31:48.950084 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:31:48 crc kubenswrapper[4724]: E0223 17:31:48.950269 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:31:48 crc kubenswrapper[4724]: E0223 17:31:48.950529 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.952773 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.952802 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.952812 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.952831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:48 crc kubenswrapper[4724]: I0223 17:31:48.952843 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:48Z","lastTransitionTime":"2026-02-23T17:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.055096 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.055144 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.055155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.055173 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.055184 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.118720 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:37:37.029157254 +0000 UTC
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.158138 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.158188 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.158205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.158228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.158242 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.261247 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.261287 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.261295 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.261312 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.261325 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.364383 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.364491 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.364514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.364543 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.364566 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.467261 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.467563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.467665 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.467738 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.467804 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.571146 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.571201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.571214 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.571240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.571255 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.674640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.674715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.674741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.674774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.674802 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.778425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.778475 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.778485 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.778505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.778516 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.881641 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.882004 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.882086 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.882177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.882261 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.984698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.984786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.984817 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.984860 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:49 crc kubenswrapper[4724]: I0223 17:31:49.984882 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:49Z","lastTransitionTime":"2026-02-23T17:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.087316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.087365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.087375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.087413 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.087427 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.119293 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:27:33.773531062 +0000 UTC
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.190051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.190116 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.190128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.190153 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.190166 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.292468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.292505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.292514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.292532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.292542 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.395130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.395164 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.395181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.395198 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.395209 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.497857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.497905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.497918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.497938 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.497949 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.590752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.599946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.599976 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.599987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.600003 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.600015 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.610488 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.628454 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.645585 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.669482 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.684100 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.700560 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.702096 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.702141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.702155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.702173 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.702184 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.719089 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.735306 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.750153 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:50Z is after 2025-08-24T17:21:41Z"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.804525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.804580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.804600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.804628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.804647 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.907291 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.907339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.907353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.907377 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.907407 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:50Z","lastTransitionTime":"2026-02-23T17:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.950657 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:31:50 crc kubenswrapper[4724]: E0223 17:31:50.950797 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.951211 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:31:50 crc kubenswrapper[4724]: E0223 17:31:50.951282 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:31:50 crc kubenswrapper[4724]: I0223 17:31:50.951341 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:50 crc kubenswrapper[4724]: E0223 17:31:50.951428 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.009825 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.009887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.009901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.009924 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.009948 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.112609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.112688 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.112700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.112716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.112726 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.120917 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:30:09.574776443 +0000 UTC Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.215840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.215923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.215941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.215975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.215994 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.319490 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.319550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.319563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.319587 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.319601 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.422109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.422147 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.422164 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.422193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.422209 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.525029 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.525074 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.525083 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.525101 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.525112 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.627598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.627646 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.627660 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.627679 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.627694 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.730579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.730654 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.730672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.730698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.730718 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.833560 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.833632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.833656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.833685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.833707 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.936376 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.936429 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.936442 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.936460 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:51 crc kubenswrapper[4724]: I0223 17:31:51.936472 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:51Z","lastTransitionTime":"2026-02-23T17:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.038935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.038975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.038986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.039001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.039013 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.121537 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:23:17.397677436 +0000 UTC Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.141517 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.141551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.141561 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.141576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.141586 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.243767 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.243799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.243807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.243821 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.243831 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.257929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.257954 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.257962 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.257974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.257981 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.270773 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:52Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.274713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.274747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.274764 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.274779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.274792 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.290459 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:52Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.293889 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.293932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.293945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.293960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.293972 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.308794 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:52Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.312756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.312799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.312814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.312832 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.312844 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.326673 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:52Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.331014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.331072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.331091 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.331114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.331126 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.344761 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:52Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.344928 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.350855 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.350885 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.350897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.350914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.350927 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.453790 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.453837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.453848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.453867 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.453879 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.556708 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.556753 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.556765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.556785 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.556798 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.659888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.659952 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.659962 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.659980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.659998 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.763488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.763538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.763550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.763569 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.763582 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.866114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.866165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.866175 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.866196 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.866208 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.950951 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.950961 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.950987 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.951186 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.951271 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:52 crc kubenswrapper[4724]: E0223 17:31:52.951348 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.968788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.968818 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.968829 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.968862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:52 crc kubenswrapper[4724]: I0223 17:31:52.968873 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:52Z","lastTransitionTime":"2026-02-23T17:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.071758 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.071799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.071832 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.071848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.071861 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.122496 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:52:25.769278896 +0000 UTC Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.178299 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.178367 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.178384 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.178431 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.178457 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.280318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.280380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.280412 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.280435 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.280447 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.383269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.383317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.383326 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.383344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.383356 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.486452 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.486505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.486515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.486532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.486545 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.589047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.589104 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.589114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.589134 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.589149 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.692375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.692432 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.692445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.692465 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.692480 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.795269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.795304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.795314 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.795332 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.795342 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.896913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.896942 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.896951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.896966 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:53 crc kubenswrapper[4724]: I0223 17:31:53.896977 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:53Z","lastTransitionTime":"2026-02-23T17:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.000209 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.000257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.000272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.000293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.000308 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.103594 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.103628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.103638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.103653 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.103664 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.123316 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:53:59.860393254 +0000 UTC Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.206194 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.206231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.206240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.206255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.206264 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.310038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.310082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.310090 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.310119 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.310132 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.412609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.412639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.412648 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.412663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.412674 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.515839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.515885 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.515897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.515915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.515926 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.617853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.617888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.617897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.617912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.617922 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.720764 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.720816 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.720829 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.720844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.720858 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.823721 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.823799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.823815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.823842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.823855 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.926846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.926905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.926918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.926943 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.926958 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:54Z","lastTransitionTime":"2026-02-23T17:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.950164 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.950233 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.950164 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:31:54 crc kubenswrapper[4724]: E0223 17:31:54.950374 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:31:54 crc kubenswrapper[4724]: E0223 17:31:54.950476 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:31:54 crc kubenswrapper[4724]: E0223 17:31:54.950610 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.968216 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.982890 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:54 crc kubenswrapper[4724]: I0223 17:31:54.996054 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.010208 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.028470 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.030468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.030509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.030525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.030547 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.030560 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.045849 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.058541 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.070154 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.082797 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.123618 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:09:10.526922194 +0000 UTC Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.133676 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.133713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.133726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.133742 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.133753 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.238657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.238731 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.238754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.238774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.238798 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.341841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.341896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.341908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.341929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.341942 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.445021 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.445095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.445106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.445124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.445135 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.547563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.547618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.547630 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.547650 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.547663 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.650290 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.650348 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.650362 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.650383 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.650419 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.753335 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.753415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.753427 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.753449 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.753461 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.856136 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.856570 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.856672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.856776 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.856861 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.960000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.960688 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.960756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.960834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:55 crc kubenswrapper[4724]: I0223 17:31:55.960916 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:55Z","lastTransitionTime":"2026-02-23T17:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.063277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.063331 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.063344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.063366 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.063379 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.124039 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:01:03.221200669 +0000 UTC Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.166437 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.166498 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.166514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.166550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.166568 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.268966 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.269002 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.269011 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.269027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.269036 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.372111 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.372449 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.372568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.372669 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.372763 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.475332 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.475383 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.475408 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.475429 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.475440 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.579082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.579145 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.579156 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.579178 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.579189 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.681816 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.681866 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.681877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.681896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.681907 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.785181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.785246 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.785256 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.785274 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.785284 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.888329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.888434 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.888450 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.888468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.888482 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.950594 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.950593 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.950616 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:56 crc kubenswrapper[4724]: E0223 17:31:56.950937 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:56 crc kubenswrapper[4724]: E0223 17:31:56.951089 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:56 crc kubenswrapper[4724]: E0223 17:31:56.951133 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.960180 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.991023 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.991109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.991128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.991153 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:56 crc kubenswrapper[4724]: I0223 17:31:56.991172 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:56Z","lastTransitionTime":"2026-02-23T17:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.097631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.097696 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.097707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.097725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.097738 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.125412 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:52:08.05193379 +0000 UTC Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.199761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.199798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.199811 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.199840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.199851 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.302600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.302636 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.302646 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.302661 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.302673 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.405121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.405198 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.405211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.405227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.405238 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.507474 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.507850 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.507936 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.508011 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.508083 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.612116 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.612153 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.612164 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.612185 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.612199 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.714791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.715129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.715214 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.715315 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.715407 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.818891 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.818949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.818963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.818983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.818997 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.921866 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.921960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.921993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.922028 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:57 crc kubenswrapper[4724]: I0223 17:31:57.922053 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:57Z","lastTransitionTime":"2026-02-23T17:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.024439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.024486 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.024501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.024519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.024531 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.125591 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:30:17.451687502 +0000 UTC Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.128481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.128551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.128567 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.128598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.128633 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.231844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.232250 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.232451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.232608 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.232767 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.335464 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.335504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.335515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.335534 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.335543 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.438836 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.438906 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.438926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.438959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.438979 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.542207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.542546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.542610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.542681 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.542737 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.645713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.645755 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.645770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.645788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.645799 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.748378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.748440 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.748451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.748467 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.748479 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.851086 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.851451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.851547 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.851618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.851678 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.950481 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.950608 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.950708 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:31:58 crc kubenswrapper[4724]: E0223 17:31:58.951200 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:31:58 crc kubenswrapper[4724]: E0223 17:31:58.951343 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.951440 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae" Feb 23 17:31:58 crc kubenswrapper[4724]: E0223 17:31:58.951475 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.954449 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.954482 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.954494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.954509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:58 crc kubenswrapper[4724]: I0223 17:31:58.954524 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:58Z","lastTransitionTime":"2026-02-23T17:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.058017 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.058075 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.058087 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.058113 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.058128 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.126265 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:45:27.494988119 +0000 UTC Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.161346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.161418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.161435 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.161458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.161477 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.264997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.265061 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.265085 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.265149 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.265172 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.367468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.367517 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.367527 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.367544 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.367557 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.470420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.470485 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.470503 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.470535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.470556 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.574283 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.574334 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.574345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.574366 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.574379 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.622476 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.624739 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.625964 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.643624 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.662040 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.677457 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.677493 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.677504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.677523 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.677540 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.680532 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.692989 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.705518 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.719741 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.732922 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.747155 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.759154 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.770950 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:59Z is after 2025-08-24T17:21:41Z" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.780311 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.780347 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.780355 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.780372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.780402 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.883874 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.883937 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.883949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.883969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.883983 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.986280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.986350 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.986370 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.986404 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:31:59 crc kubenswrapper[4724]: I0223 17:31:59.986417 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:31:59Z","lastTransitionTime":"2026-02-23T17:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.089799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.089918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.089947 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.089981 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.090014 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.127420 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:15:05.09314855 +0000 UTC Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.192626 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.192673 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.192682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.192698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.192710 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.295545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.295638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.295649 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.295670 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.295683 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.398822 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.398871 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.398888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.398914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.398933 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.502210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.502271 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.502285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.502303 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.502317 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.605038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.605096 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.605109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.605131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.605146 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.708733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.708798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.708813 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.708848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.708865 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.812837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.812891 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.812904 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.812927 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.812939 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.915455 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.915506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.915516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.915538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.915549 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:00Z","lastTransitionTime":"2026-02-23T17:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.950333 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.950351 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:00 crc kubenswrapper[4724]: E0223 17:32:00.950525 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:00 crc kubenswrapper[4724]: I0223 17:32:00.950629 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:00 crc kubenswrapper[4724]: E0223 17:32:00.950849 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:00 crc kubenswrapper[4724]: E0223 17:32:00.951047 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.018675 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.018746 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.018767 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.018797 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.018818 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.122358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.122432 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.122446 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.122466 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.122479 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.127749 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:00:56.590592011 +0000 UTC Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.224813 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.224855 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.224868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.224900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.224911 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.328529 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.328564 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.328575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.328591 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.328602 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.431798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.431853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.431865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.431885 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.431899 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.542360 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.542551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.542582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.542601 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.542613 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.644911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.645350 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.645372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.645408 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:01 crc kubenswrapper[4724]: I0223 17:32:01.645421 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:01Z","lastTransitionTime":"2026-02-23T17:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.080917 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.081416 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.082932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.082991 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.083005 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.083034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.083048 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.128080 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:07:42.858433511 +0000 UTC Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.186245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.186384 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.186483 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.186551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.186663 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.289364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.289677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.289801 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.290011 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.290211 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.393864 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.394269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.394409 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.394530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.394632 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.498694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.498738 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.498749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.498770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.498782 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.601883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.601931 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.601948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.601969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.601982 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.705358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.705423 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.705435 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.705456 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.705469 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.731977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.732022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.732033 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.732051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.732064 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.745497 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:02Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.749933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.749976 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.749993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.750016 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.750034 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.767346 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:02Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.773841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.773900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.773919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.773949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.773969 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.791087 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:02Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.801381 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.801467 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.801482 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.801508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.801524 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.819232 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:02Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.824703 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.824765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.824776 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.824795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.824809 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.842199 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:02Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.842327 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.844912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.844952 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.844962 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.844978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.844988 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.948651 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.948719 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.948737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.948769 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.948788 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:02Z","lastTransitionTime":"2026-02-23T17:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.950055 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:02 crc kubenswrapper[4724]: I0223 17:32:02.950172 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.950245 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:02 crc kubenswrapper[4724]: E0223 17:32:02.950477 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.052689 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.052726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.052735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.052752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.052761 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.088593 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.088674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.088716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.088757 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.088785 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088813 4724 configmap.go:193] Couldn't get 
configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088850 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:32:35.08880808 +0000 UTC m=+110.905007680 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088909 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:35.088885142 +0000 UTC m=+110.905084952 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088911 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088933 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088947 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.088978 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:35.088969404 +0000 UTC m=+110.905169234 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089009 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089072 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:35.089058916 +0000 UTC m=+110.905258516 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089087 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089161 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089188 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.089304 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:35.089271052 +0000 UTC m=+110.905470682 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.128821 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:09:53.074428732 +0000 UTC Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.154536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.154592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.154617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.154641 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.154657 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.257163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.257219 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.257229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.257247 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.257258 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.360907 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.360988 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.361011 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.361042 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.361060 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.463667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.463725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.463742 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.463766 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.463779 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.566227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.566282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.566295 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.566317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.566330 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.669315 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.669373 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.669419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.669447 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.669467 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.772073 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.772141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.772158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.772184 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.772202 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.875326 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.875420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.875443 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.875474 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.875494 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.950702 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:03 crc kubenswrapper[4724]: E0223 17:32:03.950917 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.978754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.978833 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.978858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.978893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:03 crc kubenswrapper[4724]: I0223 17:32:03.978918 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:03Z","lastTransitionTime":"2026-02-23T17:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.081417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.081465 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.081479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.081521 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.081533 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.129457 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:00:03.371112532 +0000 UTC Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.183980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.184037 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.184053 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.184077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.184094 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.286974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.287043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.287060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.287085 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.287102 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.390489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.390542 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.390551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.390568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.390578 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.493769 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.493854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.493868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.493906 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.493931 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.596838 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.596914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.596933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.596963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.596986 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.699925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.699968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.699977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.699993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.700006 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.802538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.802596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.802609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.802628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.803003 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.907177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.907225 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.907239 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.907259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.907272 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:04Z","lastTransitionTime":"2026-02-23T17:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.949997 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:04 crc kubenswrapper[4724]: E0223 17:32:04.950193 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.950032 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:04 crc kubenswrapper[4724]: E0223 17:32:04.950354 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.974093 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e
16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:04Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:04 crc kubenswrapper[4724]: I0223 17:32:04.988537 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:04Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.006208 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.010294 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.010350 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.010370 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.010425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.010445 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.021987 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.040139 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.057930 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.079370 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.100831 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.112432 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.112500 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.112519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.112551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.112572 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.118910 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.129594 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:24:09.023163593 +0000 UTC
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.133618 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:05Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.214845 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.214897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.214910 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.214927 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.214938 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.317791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.317864 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.317883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.317912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.317934 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.421783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.421876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.421899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.421928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.421946 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.524650 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.524712 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.524732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.524757 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.524774 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.628441 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.628501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.628516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.628538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.628551 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.731605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.731670 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.731683 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.731707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.731725 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.835528 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.835589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.835603 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.835625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.835639 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.938899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.938946 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.938959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.938978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.938991 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:05Z","lastTransitionTime":"2026-02-23T17:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:05 crc kubenswrapper[4724]: I0223 17:32:05.950327 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:05 crc kubenswrapper[4724]: E0223 17:32:05.950556 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.043031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.043103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.043121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.043148 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.043166 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.130111 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:12:18.808329881 +0000 UTC
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.146961 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.147039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.147057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.147089 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.147108 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.250749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.250846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.250882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.250925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.250947 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.354655 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.354745 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.354763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.354791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.354812 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.458345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.458479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.458497 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.458527 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.458545 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.567639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.567700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.567712 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.567731 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.567742 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.669816 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.669913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.669925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.669958 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.669976 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.772488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.772533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.772584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.772604 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.772616 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.874941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.874977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.874988 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.875003 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.875013 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.950723 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.950723 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:32:06 crc kubenswrapper[4724]: E0223 17:32:06.950897 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:32:06 crc kubenswrapper[4724]: E0223 17:32:06.950970 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.976871 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.976941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.976955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.976974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:06 crc kubenswrapper[4724]: I0223 17:32:06.976985 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:06Z","lastTransitionTime":"2026-02-23T17:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.080152 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.080201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.080210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.080231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.080243 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.130548 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:01:46.739373315 +0000 UTC
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.182824 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.182866 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.182875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.182894 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.182907 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.285865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.285917 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.285927 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.285947 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.285960 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.389844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.389936 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.390022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.390049 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.390065 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.494318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.494372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.494382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.494421 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.494435 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.597404 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.597860 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.597990 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.598104 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.598208 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.700974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.701338 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.701445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.701556 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.701653 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.804578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.804644 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.804663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.804683 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.804698 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.907744 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.908043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.908110 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.908191 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.908261 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:07Z","lastTransitionTime":"2026-02-23T17:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:07 crc kubenswrapper[4724]: I0223 17:32:07.950184 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:07 crc kubenswrapper[4724]: E0223 17:32:07.950338 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.011405 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.011805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.011872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.011954 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.012017 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.114797 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.115091 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.115220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.115300 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.115366 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.130919 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:01:53.77926858 +0000 UTC Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.218810 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.219205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.219419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.219669 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.219893 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.323053 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.323107 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.323124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.323150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.323168 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.426093 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.426158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.426180 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.426210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.426229 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.529039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.529102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.529144 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.529166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.529179 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.632173 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.632230 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.632242 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.632263 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.632277 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.736201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.736239 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.736251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.736272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.736284 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.838781 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.838820 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.838828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.838844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.838854 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.941602 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.941663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.941675 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.941694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.941705 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:08Z","lastTransitionTime":"2026-02-23T17:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.950646 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:08 crc kubenswrapper[4724]: I0223 17:32:08.950755 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:08 crc kubenswrapper[4724]: E0223 17:32:08.950814 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:08 crc kubenswrapper[4724]: E0223 17:32:08.950933 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.044238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.044285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.044298 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.044318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.044330 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.132026 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:11:03.378662428 +0000 UTC Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.147337 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.147521 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.147592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.147663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.147737 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.251227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.251292 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.251302 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.251322 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.251334 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.354963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.355019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.355035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.355058 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.355075 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.458308 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.458357 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.458368 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.458404 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.458423 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.565451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.565548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.565579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.565621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.565643 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.668092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.668162 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.668178 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.668204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.668221 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.770735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.771337 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.771948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.772378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.772793 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.876926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.876985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.877012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.877045 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.877059 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.949971 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:09 crc kubenswrapper[4724]: E0223 17:32:09.950213 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.980199 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.980293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.980316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.980344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:09 crc kubenswrapper[4724]: I0223 17:32:09.980366 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:09Z","lastTransitionTime":"2026-02-23T17:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.083371 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.083458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.083468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.083488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.083501 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.132670 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:55:43.891865929 +0000 UTC Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.186867 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.186928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.186944 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.186963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.186978 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.290680 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.290761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.290787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.290828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.290860 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.393930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.393973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.393985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.394004 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.394018 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.497196 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.497567 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.497678 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.497779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.497885 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.600672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.601114 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.601439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.601559 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.601684 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.704872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.704956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.704977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.705009 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.705030 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.808727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.808825 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.808849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.808889 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.808927 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.912002 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.912047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.912061 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.912103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.912120 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:10Z","lastTransitionTime":"2026-02-23T17:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.950076 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:10 crc kubenswrapper[4724]: I0223 17:32:10.950237 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:10 crc kubenswrapper[4724]: E0223 17:32:10.950300 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:10 crc kubenswrapper[4724]: E0223 17:32:10.950557 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.016279 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.016356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.016423 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.016461 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.016483 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.119862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.119952 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.119973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.120007 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.120026 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.132966 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:01:43.14725288 +0000 UTC Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.223008 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.223070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.223090 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.223116 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.223134 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.326150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.326494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.326577 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.326645 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.326712 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.429794 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.430119 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.430329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.430576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.430812 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.534012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.534065 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.534081 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.534100 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.534116 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.636833 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.636882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.636896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.636913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.636923 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.740007 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.740047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.740056 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.740070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.740080 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.844104 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.844488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.844618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.844733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.844823 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.947324 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.947631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.947950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.948031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.948096 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:11Z","lastTransitionTime":"2026-02-23T17:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:11 crc kubenswrapper[4724]: I0223 17:32:11.950755 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:11 crc kubenswrapper[4724]: E0223 17:32:11.951002 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.051145 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.051195 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.051205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.051223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.051235 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.133834 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:41:16.243453736 +0000 UTC Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.154513 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.154562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.154572 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.154593 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.154607 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.257808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.257857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.257868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.257887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.257898 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.361353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.361420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.361437 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.361456 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.361467 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.464578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.464628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.464639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.464661 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.464674 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.567251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.567407 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.567428 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.567447 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.567461 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.670152 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.670615 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.670719 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.670830 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.670929 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.778417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.778506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.778518 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.778535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.779004 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.882519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.882565 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.882574 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.882591 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.882602 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.950290 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.950430 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:12 crc kubenswrapper[4724]: E0223 17:32:12.951073 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:12 crc kubenswrapper[4724]: E0223 17:32:12.951186 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.985960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.986039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.986060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.986087 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:12 crc kubenswrapper[4724]: I0223 17:32:12.986107 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:12Z","lastTransitionTime":"2026-02-23T17:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.029014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.029067 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.029081 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.029102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.029116 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.045683 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:13Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.050934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.050980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.050991 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.051012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.051026 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.064716 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:13Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.069161 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.069204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.069217 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.069268 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.069281 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.088463 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:13Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.094180 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.094223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.094236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.094255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.094273 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.111497 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:13Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.115730 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.115790 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.115807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.115832 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.115853 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.135089 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:10:10.433875154 +0000 UTC Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.135453 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:13Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.135670 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.138275 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.138351 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.138380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.138453 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.138491 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.241899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.241955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.241963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.241983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.241993 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.345326 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.345445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.345472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.345507 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.345529 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.448436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.448482 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.448491 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.448515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.448532 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.550923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.550960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.550973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.550992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.551006 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.653665 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.653703 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.653713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.653727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.653737 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.756829 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.756882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.756897 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.756919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.756936 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.859562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.859605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.859617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.859634 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.859646 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.950834 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:13 crc kubenswrapper[4724]: E0223 17:32:13.951028 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.962317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.962377 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.962429 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.962464 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:13 crc kubenswrapper[4724]: I0223 17:32:13.962484 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:13Z","lastTransitionTime":"2026-02-23T17:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.065662 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.065726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.065742 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.065808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.065828 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.135545 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:18:58.826192542 +0000 UTC Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.168943 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.168991 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.169001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.169019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.169028 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.271626 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.271666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.271677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.271694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.271706 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.375118 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.375176 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.375188 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.375206 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.375602 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.480051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.480129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.480148 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.480685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.480753 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.583022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.583057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.583066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.583082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.583092 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.686186 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.686245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.686257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.686280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.686296 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.790267 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.790341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.790358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.790382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.790440 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.895969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.896038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.896049 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.896070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.896086 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.950168 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.950248 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:14 crc kubenswrapper[4724]: E0223 17:32:14.950411 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:14 crc kubenswrapper[4724]: E0223 17:32:14.950682 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.965320 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:14Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.977723 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:14Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.990818 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:14Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.998639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.998685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.998699 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.998717 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:14 crc kubenswrapper[4724]: I0223 17:32:14.998729 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:14Z","lastTransitionTime":"2026-02-23T17:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.005622 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.016011 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.027990 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.040853 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.054713 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.068342 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.082067 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:15Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.100659 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.100695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.100706 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.100723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.100734 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.136171 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:19:15.284499454 +0000 UTC Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.204468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.204504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.204516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.204553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.204563 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.307095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.307139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.307156 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.307182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.307197 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.410205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.410270 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.410293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.410329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.410354 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.512935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.512975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.512985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.513000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.513011 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.615844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.615912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.615930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.615956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.615973 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.719233 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.719302 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.719327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.719358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.719383 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.822839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.822891 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.822902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.822923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.822936 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.925645 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.925691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.925702 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.925724 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.925739 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:15Z","lastTransitionTime":"2026-02-23T17:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:15 crc kubenswrapper[4724]: I0223 17:32:15.950179 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:15 crc kubenswrapper[4724]: E0223 17:32:15.950318 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.028835 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.028893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.028909 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.028933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.028951 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.131992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.132039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.132048 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.132069 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.132082 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.137209 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:45:01.247368019 +0000 UTC Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.235017 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.235328 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.235448 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.235562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.235697 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.339424 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.339508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.339541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.339577 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.339601 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.442801 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.442848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.442858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.442876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.442891 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.545582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.545980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.546262 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.546495 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.546708 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.548646 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.564342 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.582269 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.600539 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.622299 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.637081 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.650001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.650034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.650041 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.650059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.650068 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.654072 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b0
8de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.676173 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.690527 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.704622 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.718287 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:16Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.753103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.753492 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.753637 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.753734 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.753872 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.857183 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.857234 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.857248 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.857267 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.857281 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.950989 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.951074 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:16 crc kubenswrapper[4724]: E0223 17:32:16.951217 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:16 crc kubenswrapper[4724]: E0223 17:32:16.951451 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.959105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.959270 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.959361 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.959465 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:16 crc kubenswrapper[4724]: I0223 17:32:16.959538 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:16Z","lastTransitionTime":"2026-02-23T17:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.063170 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.063223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.063237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.063260 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.063283 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.137808 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:40:21.612973867 +0000 UTC Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.166300 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.166357 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.166369 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.166419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.166438 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.269165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.269215 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.269231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.269254 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.269270 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.372247 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.372301 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.372315 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.372337 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.372356 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.474935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.475351 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.475532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.475730 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.475858 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.579276 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.579345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.579364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.579415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.579433 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.681546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.681603 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.681617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.681643 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.681659 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.784346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.784489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.784533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.784572 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.784602 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.887579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.887657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.887675 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.887700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.887719 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.950674 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:17 crc kubenswrapper[4724]: E0223 17:32:17.951336 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.971377 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.991143 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.991231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.991243 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.991262 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:17 crc kubenswrapper[4724]: I0223 17:32:17.991312 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:17Z","lastTransitionTime":"2026-02-23T17:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.094861 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.094968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.094993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.095023 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.095062 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.138171 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:24:59.320938408 +0000 UTC Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.198160 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.198207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.198216 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.198235 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.198246 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.300459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.300518 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.300539 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.300567 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.300589 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.403204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.403250 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.403259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.403277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.403290 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.506097 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.506165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.506188 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.506221 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.506254 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.610916 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.611053 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.611089 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.611137 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.611184 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.713876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.713927 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.713937 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.713955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.714001 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.816888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.816968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.816996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.817028 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.817054 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.920418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.920495 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.920508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.920533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.920547 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:18Z","lastTransitionTime":"2026-02-23T17:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.950100 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:32:18 crc kubenswrapper[4724]: I0223 17:32:18.950100 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:32:18 crc kubenswrapper[4724]: E0223 17:32:18.950299 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:32:18 crc kubenswrapper[4724]: E0223 17:32:18.950700 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.023470 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.023544 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.023568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.023596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.023619 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.126412 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.126474 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.126486 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.126509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.126524 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.138803 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:54:11.850978728 +0000 UTC
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.229361 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.229418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.229429 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.229451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.229464 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.332295 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.332356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.332372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.332426 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.332446 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.436128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.436186 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.436201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.436222 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.436233 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.539575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.539827 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.539837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.539858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.539869 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.642703 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.642785 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.642815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.642852 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.642876 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.746502 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.746561 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.746599 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.746632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.746653 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.849571 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.849632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.849654 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.849691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.849719 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.950371 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:19 crc kubenswrapper[4724]: E0223 17:32:19.950697 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.953111 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.953166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.953185 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.953212 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:19 crc kubenswrapper[4724]: I0223 17:32:19.953230 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:19Z","lastTransitionTime":"2026-02-23T17:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.056537 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.056606 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.056625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.056653 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.056673 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.139060 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:09:19.395183139 +0000 UTC
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.160747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.160809 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.160826 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.160849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.160864 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.264692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.264752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.264773 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.264804 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.264822 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.368768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.368851 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.368875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.368910 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.368935 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.472222 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.472284 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.472296 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.472320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.472335 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.575346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.575425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.575438 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.575459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.575472 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.678318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.678407 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.678421 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.678447 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.678464 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.781207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.781258 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.781269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.781285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.781324 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.883628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.883682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.883715 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.883732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.883770 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.950140 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.950179 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:32:20 crc kubenswrapper[4724]: E0223 17:32:20.950346 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:32:20 crc kubenswrapper[4724]: E0223 17:32:20.950471 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.987115 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.987168 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.987177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.987202 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:20 crc kubenswrapper[4724]: I0223 17:32:20.987213 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:20Z","lastTransitionTime":"2026-02-23T17:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.090005 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.090045 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.090054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.090069 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.090079 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.139628 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:06:01.581163963 +0000 UTC
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.192314 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.192355 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.192367 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.192406 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.192432 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.295419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.295477 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.295490 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.295509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.295523 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.399351 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.399428 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.399437 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.399453 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.399463 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.502483 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.502575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.502598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.502631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.502655 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.605035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.605107 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.605116 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.605130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.605140 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.707842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.707917 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.707934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.707959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.707977 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.811113 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.811193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.811227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.811257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.811279 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.913981 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.914032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.914045 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.914066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.914078 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:21Z","lastTransitionTime":"2026-02-23T17:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:21 crc kubenswrapper[4724]: I0223 17:32:21.950684 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:21 crc kubenswrapper[4724]: E0223 17:32:21.950823 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.017672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.017722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.017730 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.017747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.017758 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.121177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.121248 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.121283 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.121313 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.121336 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.140608 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:08:45.284345699 +0000 UTC
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.224466 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.224933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.225110 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.225297 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.225559 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.328305 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.328689 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.328783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.328889 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.328977 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.431881 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.432275 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.432379 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.432536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.432629 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.535772 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.536171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.536284 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.536422 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.536520 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.640246 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.640751 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.640995 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.641193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.641338 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.743933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.744266 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.744410 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.744521 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.744597 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.848620 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.848670 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.848685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.848710 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.848731 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.950077 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.950081 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:32:22 crc kubenswrapper[4724]: E0223 17:32:22.950243 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:32:22 crc kubenswrapper[4724]: E0223 17:32:22.950524 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.952353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.952406 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.952419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.952439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:22 crc kubenswrapper[4724]: I0223 17:32:22.952451 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:22Z","lastTransitionTime":"2026-02-23T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.056004 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.056360 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.056469 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.056575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.056663 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.141246 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:16:14.997480582 +0000 UTC
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.159863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.160221 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.160357 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.160547 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.160676 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.263185 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.263228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.263237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.263254 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.263264 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.342231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.342765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.342842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.342997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.343103 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.359301 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:23Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.364082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.364129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.364142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.364162 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.364177 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.383312 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:23Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.389011 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.389184 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.389269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.389356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.389467 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.411329 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:23Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.415933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.415988 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.416007 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.416035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.416053 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.435655 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:23Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.441268 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.441419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.441508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.441612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.441707 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.456161 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:23Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.456292 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.462067 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.462090 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.462103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.462120 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.462133 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.565471 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.565877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.565986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.566082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.566177 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.669095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.669134 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.669150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.669172 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.669184 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.772436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.772519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.772544 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.772579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.772602 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.876136 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.876732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.877092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.877266 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.877470 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.950039 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:23 crc kubenswrapper[4724]: E0223 17:32:23.950285 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.980792 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.980865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.980884 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.980913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:23 crc kubenswrapper[4724]: I0223 17:32:23.980950 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:23Z","lastTransitionTime":"2026-02-23T17:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.084589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.084668 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.084687 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.084718 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.084737 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.141885 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:36:53.942658783 +0000 UTC Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.188259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.188325 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.188349 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.188380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.188448 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.294059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.294132 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.294154 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.294183 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.294203 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.397746 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.397805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.397822 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.397852 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.397872 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.500889 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.500957 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.500975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.501001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.501021 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.604708 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.604765 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.604782 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.604807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.604824 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.707336 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.707429 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.707446 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.707466 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.707477 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.810095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.810145 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.810157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.810175 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.810189 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.913221 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.913296 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.913313 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.913342 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.913361 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:24Z","lastTransitionTime":"2026-02-23T17:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.950620 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.950620 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:24 crc kubenswrapper[4724]: E0223 17:32:24.950801 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:24 crc kubenswrapper[4724]: E0223 17:32:24.950914 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.969315 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:24Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:24 crc kubenswrapper[4724]: I0223 17:32:24.992107 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:24Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.014635 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.017159 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 
crc kubenswrapper[4724]: I0223 17:32:25.017286 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.017338 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.017373 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.017453 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.035934 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.060766 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.082314 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.099086 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.118920 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.121682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.121867 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.121988 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.122142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.122298 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.137082 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.142531 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:59:24.002385907 +0000 UTC
Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.156562 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.179566 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:25Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.225816 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.225869 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.225881 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.225900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.225910 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.328939 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.329314 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.329485 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.329658 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.329799 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.433418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.433793 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.433920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.434057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.434210 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.537301 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.537762 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.537857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.538176 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.538303 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.642167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.642241 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.642255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.642299 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.642315 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.745422 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.745774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.745868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.745987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.746075 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.849223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.849273 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.849287 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.849303 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.849315 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.950249 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:25 crc kubenswrapper[4724]: E0223 17:32:25.950784 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.953292 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.953373 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.953439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.953460 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:25 crc kubenswrapper[4724]: I0223 17:32:25.953477 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:25Z","lastTransitionTime":"2026-02-23T17:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.055868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.055930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.055941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.055960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.055972 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.143316 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:16:42.082719602 +0000 UTC Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.158508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.158572 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.158583 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.158602 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.158612 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.262166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.262218 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.262234 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.262259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.262276 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.364858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.364915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.364937 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.364967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.364989 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.467110 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.467152 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.467165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.467182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.467197 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.570125 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.570182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.570194 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.570213 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.570229 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.672905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.672988 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.673007 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.673047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.673066 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.775766 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.775807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.775819 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.775836 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.775847 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.879730 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.879808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.879834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.879865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.879887 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.950481 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.950493 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:26 crc kubenswrapper[4724]: E0223 17:32:26.950782 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:26 crc kubenswrapper[4724]: E0223 17:32:26.950960 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.982920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.982986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.983000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.983026 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:26 crc kubenswrapper[4724]: I0223 17:32:26.983044 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:26Z","lastTransitionTime":"2026-02-23T17:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.008092 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2dn8m"] Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.009166 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2dn8m"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.012732 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.013178 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.014894 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.033227 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.049614 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.068560 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.086576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.086768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.086800 4724 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.086860 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.086889 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.088202 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.108619 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.123625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-hosts-file\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.123713 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pmd\" (UniqueName: \"kubernetes.io/projected/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-kube-api-access-b5pmd\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.124699 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.138530 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.143536 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:44:56.779921271 +0000 UTC Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.163466 4724 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:
30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.177491 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.190787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.191298 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.191916 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.192124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.192574 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.193681 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.208942 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.221590 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.225013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-hosts-file\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.225086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pmd\" (UniqueName: \"kubernetes.io/projected/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-kube-api-access-b5pmd\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.225226 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-hosts-file\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.245032 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pmd\" (UniqueName: \"kubernetes.io/projected/00434a2a-97a5-4d8f-9a6f-9dc5b372cd20-kube-api-access-b5pmd\") pod \"node-resolver-2dn8m\" (UID: \"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\") " pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.296180 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.296243 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.296256 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.296277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.296290 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.326738 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2dn8m" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.392420 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qssx7"] Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.393556 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rw78r"] Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.393773 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mmxrg"] Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.393995 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.395148 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.395985 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.396501 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.397016 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.397712 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.397872 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.399134 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.399255 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.401511 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.401532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.401543 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.401560 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.401573 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.403277 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.403736 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.403850 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.403994 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.404057 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.404233 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.417553 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.430916 4724 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.453176 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5
a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.473431 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5b
c122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.488340 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.504736 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.504794 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.504807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.504926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.504943 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.507203 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.524248 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.527940 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-cnibin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.527969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-hostroot\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.527991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wrps\" (UniqueName: \"kubernetes.io/projected/45a042db-4057-4913-8091-da7d8c79feba-kube-api-access-2wrps\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528012 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-system-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-multus-daemon-config\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528160 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-rootfs\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528233 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-socket-dir-parent\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528331 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-os-release\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528356 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-etc-kubernetes\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-k8s-cni-cncf-io\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528436 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-os-release\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528450 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-netns\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-multus-certs\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-cnibin\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528517 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528539 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528554 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-multus\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528587 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-cni-binary-copy\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528613 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-kubelet\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 
17:32:27.528636 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-system-cni-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528657 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgpd\" (UniqueName: \"kubernetes.io/projected/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-kube-api-access-6cgpd\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-proxy-tls\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528763 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-conf-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km28\" (UniqueName: \"kubernetes.io/projected/8179b275-39bb-472c-915f-a02b2a09c88d-kube-api-access-8km28\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.528831 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-bin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.541667 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.555516 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.570723 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e
16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.583704 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.600500 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.608795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.608841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.608853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.608886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.608928 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.618273 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630379 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630423 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-socket-dir-parent\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-os-release\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-etc-kubernetes\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-k8s-cni-cncf-io\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630550 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-os-release\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630577 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-netns\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 
crc kubenswrapper[4724]: I0223 17:32:27.630602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-multus-certs\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630630 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-cnibin\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630659 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630687 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630730 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-multus\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630770 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-cni-binary-copy\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-kubelet\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630807 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-os-release\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630899 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-etc-kubernetes\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630906 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-system-cni-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-socket-dir-parent\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630822 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-system-cni-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630951 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-cnibin\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.630992 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631040 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgpd\" (UniqueName: \"kubernetes.io/projected/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-kube-api-access-6cgpd\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631167 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-proxy-tls\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631195 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-conf-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km28\" (UniqueName: \"kubernetes.io/projected/8179b275-39bb-472c-915f-a02b2a09c88d-kube-api-access-8km28\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631244 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-bin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631277 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-cnibin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-hostroot\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631320 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wrps\" (UniqueName: \"kubernetes.io/projected/45a042db-4057-4913-8091-da7d8c79feba-kube-api-access-2wrps\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631340 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-system-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-multus-daemon-config\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631404 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631466 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-rootfs\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-k8s-cni-cncf-io\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631428 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-rootfs\") pod \"machine-config-daemon-rw78r\" (UID: 
\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631527 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-os-release\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631565 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-bin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631611 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-cnibin\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631611 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8179b275-39bb-472c-915f-a02b2a09c88d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-hostroot\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631927 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-netns\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.631966 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-run-multus-certs\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632076 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-system-cni-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-cni-multus\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632165 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-multus-conf-dir\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632205 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45a042db-4057-4913-8091-da7d8c79feba-host-var-lib-kubelet\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632515 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-cni-binary-copy\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632586 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632629 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632634 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 
10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.632878 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45a042db-4057-4913-8091-da7d8c79feba-multus-daemon-config\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.633090 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8179b275-39bb-472c-915f-a02b2a09c88d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.635962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-proxy-tls\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.646720 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.648149 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgpd\" (UniqueName: \"kubernetes.io/projected/a065b197-b354-4d9b-b2e9-7d4882a3d1a2-kube-api-access-6cgpd\") pod \"machine-config-daemon-rw78r\" (UID: \"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\") " pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.648829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wrps\" (UniqueName: \"kubernetes.io/projected/45a042db-4057-4913-8091-da7d8c79feba-kube-api-access-2wrps\") pod \"multus-mmxrg\" (UID: \"45a042db-4057-4913-8091-da7d8c79feba\") " pod="openshift-multus/multus-mmxrg" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.655566 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km28\" (UniqueName: \"kubernetes.io/projected/8179b275-39bb-472c-915f-a02b2a09c88d-kube-api-access-8km28\") pod \"multus-additional-cni-plugins-qssx7\" (UID: \"8179b275-39bb-472c-915f-a02b2a09c88d\") " pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.659904 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.671408 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
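x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z"

Every status patch in this stretch is rejected for the same reason: the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-02-23. A minimal diagnostic sketch from the node, assuming the webhook is still listening on that port:

    # Node clock, for comparison against the certificate's validity window.
    date -u
    # Print the notBefore/notAfter dates of the certificate served on :9743.
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates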
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.686051 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.698705 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.703977 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2dn8m" event={"ID":"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20","Type":"ContainerStarted","Data":"271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.704041 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2dn8m" event={"ID":"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20","Type":"ContainerStarted","Data":"67068c73e317459c308db81d710f8ea79c6f60e5fbac3ddc76f147da272d75ce"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.711916 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.711964 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.711980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.712005 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.712022 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
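Has your network provider started?"}

The node has just been marked NotReady because kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/; the multus and ovnkube pods whose volumes are being mounted above are the components expected to write that configuration once they start. A quick check from the node, using the paths reported in the log:

    # Kubelet polls this directory for a CNI config file (see the NodeNotReady message above).
    ls -l /etc/kubernetes/cni/net.d/
    # Follow kubelet until the NetworkReady condition flips back to true.
    journalctl -u kubelet -f | grep -i networkready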
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.714259 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.727270 4724 util.go:30] "No sandbox for pod can be found. 
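Need to start a new one" pod="openshift-multus/multus-mmxrg"

With no sandbox recorded for these pods, kubelet asks CRI-O to create fresh pod sandboxes; the 404 "can't find the container" warnings from the cadvisor watch a few entries below are typically a benign race, the cgroup appearing before the new container is registered. A sketch of inspecting sandbox state directly, assuming crictl on the node is configured for the CRI-O socket:

    # List pod sandboxes CRI-O knows about for the multus pod.
    crictl pods --name multus-mmxrg
    # Show all containers, including ones still being created.
    crictl ps -a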
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.727404 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.741675 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.741863 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qssx7" Feb 23 17:32:27 crc kubenswrapper[4724]: W0223 17:32:27.742718 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a042db_4057_4913_8091_da7d8c79feba.slice/crio-6d9753732859c409907c016e2a39419c7f8694fd5a5f4bed489dfdab354070e6 WatchSource:0}: Error finding container 6d9753732859c409907c016e2a39419c7f8694fd5a5f4bed489dfdab354070e6: Status 404 returned error can't find the container with id 6d9753732859c409907c016e2a39419c7f8694fd5a5f4bed489dfdab354070e6 Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.751211 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:32:27 crc kubenswrapper[4724]: W0223 17:32:27.752151 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8179b275_39bb_472c_915f_a02b2a09c88d.slice/crio-8439350a148df1e8a4c0ad3b586abc3d8c994558347d99372b6de62a82703a1f WatchSource:0}: Error finding container 8439350a148df1e8a4c0ad3b586abc3d8c994558347d99372b6de62a82703a1f: Status 404 returned error can't find the container with id 8439350a148df1e8a4c0ad3b586abc3d8c994558347d99372b6de62a82703a1f Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.766636 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752
d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.783561 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78fmj"] Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.784507 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.787361 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.787562 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 
23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.787714 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.789105 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.789135 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.789235 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.789262 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.789383 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.808237 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.816596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.816635 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.816651 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.816672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.816685 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.820364 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833814 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833858 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpsrk\" (UniqueName: \"kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833880 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833898 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833930 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833973 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.833993 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834027 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834049 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834072 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834199 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834275 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834327 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834361 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834431 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834505 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834527 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.834553 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.839246 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.853682 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.873795 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.888765 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.902572 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.918546 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.919519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.919560 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.919569 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.919587 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.919598 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:27Z","lastTransitionTime":"2026-02-23T17:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.933017 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935276 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935322 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935365 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935406 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935440 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935365 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935410 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935821 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935846 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935969 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.935991 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936015 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936067 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936103 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpsrk\" (UniqueName: \"kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936214 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936234 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936254 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936308 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936347 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936329 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936406 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936508 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936681 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936738 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936783 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936874 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936913 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.936950 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.937188 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.944204 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.951672 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:27 crc kubenswrapper[4724]: E0223 17:32:27.951953 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.957134 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.959860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpsrk\" (UniqueName: \"kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk\") pod \"ovnkube-node-78fmj\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " pod="openshift-ovn-kubernetes/ovnkube-node-78fmj"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.970819 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:27 crc kubenswrapper[4724]: I0223 17:32:27.988238 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:27Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.003623 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.016210 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.022228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.022302 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.022314 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.022336 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.022348 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.032715 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.067430 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5
a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.081914 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.095965 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.107235 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.112990 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: W0223 17:32:28.121551 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c8df7b6_e5f2_4950_b2d2_9f1583fe76c1.slice/crio-4d3e62c813d4b4e51956aba87980d5e1132e8213ba780042a99f3f6149163ef8 WatchSource:0}: Error finding container 4d3e62c813d4b4e51956aba87980d5e1132e8213ba780042a99f3f6149163ef8: Status 404 returned error can't find the container with id 4d3e62c813d4b4e51956aba87980d5e1132e8213ba780042a99f3f6149163ef8 Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.124341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.124381 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.124406 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.124427 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.124443 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.144371 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:27:27.173215899 +0000 UTC Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.227472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.227515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.227526 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.227548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.227560 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.330158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.330188 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.330197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.330214 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.330225 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.434267 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.434658 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.434735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.434831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.434896 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.538678 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.538737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.538749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.538769 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.538786 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.641752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.641815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.641830 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.641853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.641869 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.715770 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2" exitCode=0 Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.715836 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.715879 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerStarted","Data":"8439350a148df1e8a4c0ad3b586abc3d8c994558347d99372b6de62a82703a1f"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.718726 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerStarted","Data":"1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.718775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerStarted","Data":"6d9753732859c409907c016e2a39419c7f8694fd5a5f4bed489dfdab354070e6"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.727321 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02" exitCode=0 Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.727478 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.727559 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"4d3e62c813d4b4e51956aba87980d5e1132e8213ba780042a99f3f6149163ef8"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.733943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.734118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.734223 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"5b6353b4c8fc2e7dc3c2d7b3b986044c837b6b0c190cc75525f6dc673916ac9c"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.738851 4724 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.747276 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.747317 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.747327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.747344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.747355 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.755906 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.775384 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.789854 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.806531 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.850269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.850607 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.850714 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.850842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.850949 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.950230 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.950249 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:28 crc kubenswrapper[4724]: E0223 17:32:28.950555 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:28 crc kubenswrapper[4724]: E0223 17:32:28.950734 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.953060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.953229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.953300 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.953376 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:28 crc kubenswrapper[4724]: I0223 17:32:28.953454 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:28Z","lastTransitionTime":"2026-02-23T17:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.056723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.056774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.056787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.056826 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.056845 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.074411 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753f
c478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:28Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.095612 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.113780 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.129858 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.145013 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:07:53.827405637 +0000 UTC Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.158985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.159030 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.159044 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.159062 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.159074 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.165315 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.479534 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.485998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.486052 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.486062 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.486079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.486093 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.496732 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: 
I0223 17:32:29.521280 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.537597 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.553154 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.568404 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.584104 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.589639 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.589753 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.589766 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.589786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.589800 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.597875 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.614324 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.628129 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.649589 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.665429 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.678326 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.692434 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.692505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.692515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.692535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.692548 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.693070 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e
16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.704976 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.716219 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.731158 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.739066 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a" exitCode=0 Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.739153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.741935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.744415 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.754786 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.768002 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.780997 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b
8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.797308 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.797352 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.797365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.797405 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.797421 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.804344 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc05
8e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.820958 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.839288 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.850497 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\
" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.892909 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.904255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.904305 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.904320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.904339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.904351 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:29Z","lastTransitionTime":"2026-02-23T17:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.915780 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.950056 4724 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.950109 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d3951
5a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: E0223 17:32:29.950238 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.965897 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:29 crc kubenswrapper[4724]: I0223 17:32:29.984412 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.000576 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:29Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.012492 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.012536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.012549 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.012571 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.012583 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.014889 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.030107 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.047859 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.060523 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.069441 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.089331 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.103984 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.114563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.114610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.114620 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.114644 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.114658 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.145211 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:21:22.605431323 +0000 UTC Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.217340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.217407 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.217421 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.217440 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.217452 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.320171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.320221 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.320236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.320258 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.320272 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.423823 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.423882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.423903 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.423929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.423949 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.526823 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.526886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.526902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.526925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.526941 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.630612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.630671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.630686 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.630711 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.630730 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.733302 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.733353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.733364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.733409 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.733430 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.746878 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611" exitCode=0 Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.746969 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.751247 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.751291 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.751309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.751325 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.751339 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.766743 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.805292 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z 
is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.826486 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.836271 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.836324 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.836346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.836375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.836415 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.851548 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.869709 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.888681 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.901739 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.919886 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.937581 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.941641 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.941692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.941706 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.941728 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.941740 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:30Z","lastTransitionTime":"2026-02-23T17:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.950462 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.950512 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:30 crc kubenswrapper[4724]: E0223 17:32:30.950591 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:30 crc kubenswrapper[4724]: E0223 17:32:30.950735 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.959310 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.977523 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:30 crc kubenswrapper[4724]: I0223 17:32:30.994158 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:30Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.011921 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.026226 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.044241 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.045111 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.045159 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.045177 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.045197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.045207 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.064365 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.146266 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:33:29.434121227 +0000 UTC Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.148494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.148580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.148605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.148640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 
17:32:31.148665 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.252381 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.252486 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.252505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.252535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.252554 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.356282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.356342 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.356358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.356380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.356411 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.459685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.459717 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.459726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.459743 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.459754 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.563904 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.564489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.564509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.564537 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.564553 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.668761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.668818 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.668837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.668864 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.668882 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.758154 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77" exitCode=0 Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.758228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.771223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.771272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.771285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.771309 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.771322 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.781014 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.797445 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.817771 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.837445 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.856583 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T
17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.874807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.874896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.874925 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.874960 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.874990 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.877070 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.911133 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z 
is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.931976 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.949951 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.950162 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:31 crc kubenswrapper[4724]: E0223 17:32:31.950341 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.968636 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8
bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.979500 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.979642 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.979674 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.979710 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.979739 
4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:31Z","lastTransitionTime":"2026-02-23T17:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:31 crc kubenswrapper[4724]: I0223 17:32:31.983857 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:31Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.004662 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.021258 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.037933 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.056423 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.074499 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.083056 4724 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.083115 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.083128 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.083151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.083165 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.147188 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:54:22.192512925 +0000 UTC Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.186532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.186583 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.186594 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.186613 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.186625 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.289970 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.290038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.290054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.290129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.290150 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.392915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.392984 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.392997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.393019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.393033 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.495509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.495563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.495575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.495596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.495609 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.598773 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.598817 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.598828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.598848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.598860 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.704553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.704624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.704636 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.704684 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.704697 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.774204 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.778650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerStarted","Data":"c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.799593 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.807611 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.807681 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.807700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.807723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.807738 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.823528 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.840337 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.857212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.869475 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.884218 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.899661 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.909575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.909614 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.909627 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.909668 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.909686 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:32Z","lastTransitionTime":"2026-02-23T17:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.913989 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.927941 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.939746 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.950670 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.950764 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:32 crc kubenswrapper[4724]: E0223 17:32:32.950911 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:32 crc kubenswrapper[4724]: E0223 17:32:32.951083 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.956932 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.971348 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:32 crc kubenswrapper[4724]: I0223 17:32:32.987296 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.002998 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:32Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.012473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.012525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.012540 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.012563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.012578 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.018783 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.040586 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z 
is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.115210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.115257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.115268 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.115288 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.115304 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.147937 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:59:49.476457705 +0000 UTC Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.218133 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.218190 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.218203 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.218272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.218287 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.320972 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.321016 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.321026 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.321042 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.321052 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.424563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.424627 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.424637 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.424664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.424676 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.527549 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.527594 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.527607 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.527628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.527641 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.620583 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.620628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.620637 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.620654 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.620664 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.632758 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.636683 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.636737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.636748 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.636768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.636782 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.649336 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.653846 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.653905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.653918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.653944 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.653961 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.666492 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.670445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.670512 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.670525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.670548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.670564 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.691014 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.695501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.695553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.695576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.695606 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.695628 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.707252 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.707377 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.709431 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.709473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.709489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.709512 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.709529 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.813609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.813650 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.813660 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.813677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.813689 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:33Z","lastTransitionTime":"2026-02-23T17:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.818057 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37" exitCode=0 Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.818119 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37"} Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.831939 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.849598 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.863159 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.876719 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.887952 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.899556 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30
:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.915564 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.936781 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.950413 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:33 crc kubenswrapper[4724]: E0223 17:32:33.950624 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.953093 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.970282 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.979707 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:33 crc kubenswrapper[4724]: I0223 17:32:33.993447 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:33Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.003901 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.014792 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.025450 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.037204 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.048116 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.048169 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.048182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.048204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.048219 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.148225 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:53:58.252850611 +0000 UTC Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.151719 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.151780 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.151793 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.151821 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.151835 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.254668 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.254725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.254737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.254756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.254772 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.306989 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-k77s6"] Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.307657 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.310034 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.311443 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.311511 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.312384 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.322157 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.334221 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.335536 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrjs\" (UniqueName: \"kubernetes.io/projected/582f9368-9429-4cf2-a78d-8a255fc140a8-kube-api-access-6hrjs\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.335583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/582f9368-9429-4cf2-a78d-8a255fc140a8-serviceca\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.335604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582f9368-9429-4cf2-a78d-8a255fc140a8-host\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.344116 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.354930 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.356814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.356839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.356848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.356863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.356872 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.368155 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052
faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.379705 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 
2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.394653 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.411050 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z 
is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.421772 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.436362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/582f9368-9429-4cf2-a78d-8a255fc140a8-serviceca\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.436474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582f9368-9429-4cf2-a78d-8a255fc140a8-host\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.436647 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hrjs\" (UniqueName: \"kubernetes.io/projected/582f9368-9429-4cf2-a78d-8a255fc140a8-kube-api-access-6hrjs\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.437244 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582f9368-9429-4cf2-a78d-8a255fc140a8-host\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.437853 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.438892 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/582f9368-9429-4cf2-a78d-8a255fc140a8-serviceca\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.452906 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.458337 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hrjs\" (UniqueName: \"kubernetes.io/projected/582f9368-9429-4cf2-a78d-8a255fc140a8-kube-api-access-6hrjs\") pod \"node-ca-k77s6\" (UID: \"582f9368-9429-4cf2-a78d-8a255fc140a8\") " pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.459879 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.459944 4724 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.459956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.459979 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.459997 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.465346 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.488557 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc
3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.501241 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.513636 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.524637 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.536273 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.566829 4724 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.567254 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.567435 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.567609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.567735 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.621287 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k77s6" Feb 23 17:32:34 crc kubenswrapper[4724]: W0223 17:32:34.641904 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582f9368_9429_4cf2_a78d_8a255fc140a8.slice/crio-6dc149e0c8390e7f058e51c4b8a3e2491ba3a3d7514d7c45fd866d4b2e4ad65e WatchSource:0}: Error finding container 6dc149e0c8390e7f058e51c4b8a3e2491ba3a3d7514d7c45fd866d4b2e4ad65e: Status 404 returned error can't find the container with id 6dc149e0c8390e7f058e51c4b8a3e2491ba3a3d7514d7c45fd866d4b2e4ad65e Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.670632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.670669 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.670679 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.670695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.670706 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.773786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.773828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.773839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.773856 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.773870 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.824660 4724 generic.go:334] "Generic (PLEG): container finished" podID="8179b275-39bb-472c-915f-a02b2a09c88d" containerID="a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb" exitCode=0 Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.824754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerDied","Data":"a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.826048 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k77s6" event={"ID":"582f9368-9429-4cf2-a78d-8a255fc140a8","Type":"ContainerStarted","Data":"6dc149e0c8390e7f058e51c4b8a3e2491ba3a3d7514d7c45fd866d4b2e4ad65e"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.839204 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.847791 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.860800 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.875727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.875771 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.875783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.875805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.875818 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:34Z","lastTransitionTime":"2026-02-23T17:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.877750 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.889799 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.902550 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.926766 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.943459 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.950151 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.950263 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:34 crc kubenswrapper[4724]: E0223 17:32:34.950309 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:34 crc kubenswrapper[4724]: E0223 17:32:34.950455 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:34 crc kubenswrapper[4724]: I0223 17:32:34.961284 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.166685 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:34Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.166800 4724 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:03:27.09220977 +0000 UTC Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.167050 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167224 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.167207792 +0000 UTC m=+174.983407392 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.167248 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.167280 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167408 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167451 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.167443648 +0000 UTC m=+174.983643248 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.167629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167738 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167752 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167765 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167791 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.167785246 +0000 UTC m=+174.983984836 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167911 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167923 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167930 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.167954 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-23 17:33:39.16794471 +0000 UTC m=+174.984144310 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.168049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.168137 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.168166 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.168158186 +0000 UTC m=+174.984357776 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.188994 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.189650 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.189677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.189777 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.189805 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.197877 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.217537 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.230898 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.245264 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.259330 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.273570 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.288159 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.292657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.292705 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.292717 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.292738 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.292750 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.302228 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.345542 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.368774 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.395839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.395876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.395886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.395905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.395917 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.400021 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.414916 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.428019 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.440435 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.454520 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55
70859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.465287 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.477736 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.489733 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.498411 4724 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.498463 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.498481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.498508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.498526 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.500323 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"na
me\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.518121 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a
3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.532305 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator
@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.551641 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.565730 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.578480 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.601465 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.601523 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.601552 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.601578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.601591 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.704506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.704578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.704602 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.704635 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.704657 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.806954 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.806998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.807009 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.807033 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.807045 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.833691 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.833979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.841320 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" event={"ID":"8179b275-39bb-472c-915f-a02b2a09c88d","Type":"ContainerStarted","Data":"c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.843942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k77s6" event={"ID":"582f9368-9429-4cf2-a78d-8a255fc140a8","Type":"ContainerStarted","Data":"6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.851193 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.866563 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.894495 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.895285 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.910840 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.911657 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.911691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:35 crc 
kubenswrapper[4724]: I0223 17:32:35.911700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.911734 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.911746 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:35Z","lastTransitionTime":"2026-02-23T17:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.923361 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.943687 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.950578 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:35 crc kubenswrapper[4724]: E0223 17:32:35.950692 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.960005 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.974008 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:35 crc kubenswrapper[4724]: I0223 17:32:35.987853 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.002201 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-c
ert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:35Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.013999 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.015013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.015047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.015061 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.015080 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.015091 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.027212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.045088 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.060244 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.081987 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.092970 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.104920 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117458 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117680 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117696 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.117707 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.135367 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052
faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.149786 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.161112 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.167468 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:48:17.510748619 +0000 UTC Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.176884 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.187718 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30
:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.207534 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.221340 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.221928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.221993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.222005 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.222034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.222059 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.251830 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.269982 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.286526 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.303678 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.324587 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.325056 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.325100 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc 
kubenswrapper[4724]: I0223 17:32:36.325112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.325129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.325143 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.343020 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.358585 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.381969 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.399943 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.428345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.428413 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.428425 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.428442 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.428455 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.531378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.531454 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.531466 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.531490 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.531507 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.635017 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.635079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.635092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.635123 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.635137 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.738726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.738779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.738790 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.738814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.738829 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.841195 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.841245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.841259 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.841279 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.841293 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.847966 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.848011 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.871991 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.883872 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.900063 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.914767 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.934861 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.946593 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.946632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.946643 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.946672 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.946683 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:36Z","lastTransitionTime":"2026-02-23T17:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.952015 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:36 crc kubenswrapper[4724]: E0223 17:32:36.952147 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.952299 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:36 crc kubenswrapper[4724]: E0223 17:32:36.952355 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.952648 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.968090 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:36 crc kubenswrapper[4724]: I0223 17:32:36.988905 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:36Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.008069 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.023900 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.037470 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.049734 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.049856 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.049873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.050046 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.050064 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.054888 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.065272 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.084426 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.098494 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.110290 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.138755 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.150273 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:37Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.152523 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.152666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.152754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.152844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.152930 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.167762 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:41:50.349463402 +0000 UTC Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.255518 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.255572 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.255584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.255601 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.255613 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.358752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.358798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.358809 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.358827 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.358837 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.461175 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.461229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.461240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.461257 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.461271 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.564217 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.564304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.564318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.564335 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.564346 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.667710 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.667770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.667783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.667807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.667829 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.771551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.771631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.771652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.771682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.771701 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.874851 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.874901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.874910 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.874929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.874940 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.950829 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:37 crc kubenswrapper[4724]: E0223 17:32:37.951007 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.977847 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.977893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.977902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.977919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:37 crc kubenswrapper[4724]: I0223 17:32:37.977932 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:37Z","lastTransitionTime":"2026-02-23T17:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.080703 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.080749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.080759 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.080778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.080789 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.168433 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:09:09.80382477 +0000 UTC Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.183990 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.184038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.184050 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.184070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.184085 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.287167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.287236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.287255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.287284 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.287302 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.389529 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.389574 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.389585 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.389605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.389616 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.492331 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.492386 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.492417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.492444 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.492459 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.595515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.595556 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.595565 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.595581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.595591 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.698708 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.698808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.698827 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.698848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.698861 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.801978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.802025 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.802035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.802052 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.802064 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.857724 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/0.log" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.862072 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491" exitCode=1 Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.862128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.863382 4724 scope.go:117] "RemoveContainer" containerID="9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.878227 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.896735 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.909434 4724 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.909490 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.909504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.909525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.909547 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:38Z","lastTransitionTime":"2026-02-23T17:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.918107 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"na
me\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.932225 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a
3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.949987 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:38 crc kubenswrapper[4724]: E0223 17:32:38.950129 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.950427 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:38 crc kubenswrapper[4724]: E0223 17:32:38.950491 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.955701 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.970655 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.985496 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:38 crc kubenswrapper[4724]: I0223 17:32:38.999196 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:38Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.011383 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30
:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.014355 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.014414 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.014430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.014450 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.014464 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.030502 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:38Z\\\",\\\"message\\\":\\\"topping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 17:32:38.552144 6370 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 17:32:38.552186 6370 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 17:32:38.552203 6370 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 17:32:38.552210 6370 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 17:32:38.552231 6370 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 17:32:38.552251 6370 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 17:32:38.552261 6370 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 17:32:38.552264 6370 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 17:32:38.552276 6370 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 17:32:38.552279 6370 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 17:32:38.552276 6370 factory.go:656] Stopping watch factory\\\\nI0223 17:32:38.552289 6370 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 17:32:38.552300 6370 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 17:32:38.552303 6370 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
17:32:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.041109 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.064500 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc2
3f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.078856 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.091854 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.117003 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.117047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.117056 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.117075 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.117104 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.142094 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.157849 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.169133 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:04:17.060869236 +0000 UTC Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.169957 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 
17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.219502 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.219559 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.219576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.219600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.219617 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.322725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.322771 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.322788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.322817 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.322836 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.425969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.426022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.426043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.426071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.426091 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.529213 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.529293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.529316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.529349 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.529370 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.632743 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.632852 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.632878 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.632905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.632926 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.736453 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.736530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.736550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.736581 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.736603 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.839982 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.840029 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.840038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.840059 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.840072 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.869193 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/0.log" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.873335 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.873905 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.896656 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.922224 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.944594 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.944667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.944690 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.944725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.944754 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:39Z","lastTransitionTime":"2026-02-23T17:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.950998 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:39 crc kubenswrapper[4724]: E0223 17:32:39.951227 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.952661 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:39 crc kubenswrapper[4724]: I0223 17:32:39.981638 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.000539 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:39Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.034292 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30
:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.047924 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.047977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.047992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.048013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.048026 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.061904 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:38Z\\\",\\\"message\\\":\\\"topping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 17:32:38.552144 6370 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 17:32:38.552186 6370 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 17:32:38.552203 6370 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 17:32:38.552210 6370 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 17:32:38.552231 6370 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 17:32:38.552251 6370 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 17:32:38.552261 6370 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 17:32:38.552264 6370 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 17:32:38.552276 6370 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 17:32:38.552279 6370 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 17:32:38.552276 6370 factory.go:656] Stopping watch factory\\\\nI0223 17:32:38.552289 6370 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 17:32:38.552300 6370 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 17:32:38.552303 6370 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
17:32:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.084052 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.107594 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5
a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.122469 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.134591 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 
23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.156348 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.158003 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.158058 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.158074 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.158098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.158114 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.169733 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:43:05.617630854 +0000 UTC Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.174337 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.189268 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.205060 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.220285 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.233564 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.261305 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.261344 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.261356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.261371 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.261382 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.363893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.363941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.363952 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.363970 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.363981 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.413757 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp"] Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.414527 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.417988 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.419313 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.419591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.419687 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwgs\" (UniqueName: \"kubernetes.io/projected/1b0504b6-05e5-451b-af95-1745052b85f1-kube-api-access-5mwgs\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.419729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.419855 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b0504b6-05e5-451b-af95-1745052b85f1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.440422 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b
6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:38Z\\\",\\\"message\\\":\\\"topping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 17:32:38.552144 6370 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 17:32:38.552186 6370 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 17:32:38.552203 6370 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 17:32:38.552210 6370 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 17:32:38.552231 6370 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 17:32:38.552251 6370 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 17:32:38.552261 6370 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 17:32:38.552264 6370 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 17:32:38.552276 6370 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 17:32:38.552279 6370 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 17:32:38.552276 6370 factory.go:656] Stopping watch factory\\\\nI0223 17:32:38.552289 6370 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 17:32:38.552300 6370 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 17:32:38.552303 6370 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
17:32:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.456073 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.466809 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.466859 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.466872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.466893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.466909 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.470698 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.484708 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.499000 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.512468 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.521542 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.521594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwgs\" (UniqueName: \"kubernetes.io/projected/1b0504b6-05e5-451b-af95-1745052b85f1-kube-api-access-5mwgs\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.521627 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.521651 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b0504b6-05e5-451b-af95-1745052b85f1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.522254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.522636 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b0504b6-05e5-451b-af95-1745052b85f1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.529437 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32
:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e61
73e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.529590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b0504b6-05e5-451b-af95-1745052b85f1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.539001 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwgs\" (UniqueName: \"kubernetes.io/projected/1b0504b6-05e5-451b-af95-1745052b85f1-kube-api-access-5mwgs\") pod \"ovnkube-control-plane-749d76644c-zmvqp\" (UID: \"1b0504b6-05e5-451b-af95-1745052b85f1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.541467 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.552087 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.569578 4724 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:
30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.570413 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.570470 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.570484 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.570509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.570525 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.580960 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.592793 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.604516 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.618924 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-c
ert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.632017 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.643212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.655312 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.665917 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.672537 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.672604 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.672617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.672641 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.672653 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.728477 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" Feb 23 17:32:40 crc kubenswrapper[4724]: W0223 17:32:40.749126 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0504b6_05e5_451b_af95_1745052b85f1.slice/crio-93ec0a17b7d5a245f68cfbef00b9c9cd3ea8ef54eaaae7594d9f1348c8a91eab WatchSource:0}: Error finding container 93ec0a17b7d5a245f68cfbef00b9c9cd3ea8ef54eaaae7594d9f1348c8a91eab: Status 404 returned error can't find the container with id 93ec0a17b7d5a245f68cfbef00b9c9cd3ea8ef54eaaae7594d9f1348c8a91eab Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.775929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.775987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.775997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.776012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.776021 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.878424 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.878484 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.878504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.878538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.878559 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.879728 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/1.log" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.880575 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/0.log" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.883658 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8" exitCode=1 Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.883739 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.883792 4724 scope.go:117] "RemoveContainer" containerID="9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.884504 4724 scope.go:117] "RemoveContainer" containerID="836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8" Feb 23 17:32:40 crc kubenswrapper[4724]: E0223 17:32:40.884684 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.884865 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" event={"ID":"1b0504b6-05e5-451b-af95-1745052b85f1","Type":"ContainerStarted","Data":"93ec0a17b7d5a245f68cfbef00b9c9cd3ea8ef54eaaae7594d9f1348c8a91eab"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.899672 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.922311 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5
a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.943809 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.950954 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:40 crc kubenswrapper[4724]: E0223 17:32:40.951095 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.951443 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:40 crc kubenswrapper[4724]: E0223 17:32:40.951503 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.959509 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.969908 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.982417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.982473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.982487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.982510 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.982526 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:40Z","lastTransitionTime":"2026-02-23T17:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:40 crc kubenswrapper[4724]: I0223 17:32:40.985834 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.032285 4724 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T17:32:40Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.057994 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.081185 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.085166 4724 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.085208 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.085219 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.085237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.085250 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.099534 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"na
me\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.111515 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a
3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.127304 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator
@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.141524 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.150649 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q2jvs"] Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.151372 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.151523 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.157948 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.169877 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:33:04.923374602 +0000 UTC Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.171536 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.185749 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.187545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.187690 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.187791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.187882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.187965 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.207804 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:38Z\\\",\\\"message\\\":\\\"topping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 17:32:38.552144 6370 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 17:32:38.552186 6370 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 17:32:38.552203 6370 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 17:32:38.552210 6370 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 17:32:38.552231 6370 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 17:32:38.552251 6370 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 17:32:38.552261 6370 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 17:32:38.552264 6370 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 17:32:38.552276 6370 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 17:32:38.552279 6370 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 17:32:38.552276 6370 factory.go:656] Stopping watch factory\\\\nI0223 17:32:38.552289 6370 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 17:32:38.552300 6370 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 17:32:38.552303 6370 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
17:32:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.220343 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.229069 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdl5g\" (UniqueName: \"kubernetes.io/projected/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-kube-api-access-mdl5g\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.229149 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.237750 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.264275 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.281074 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.290275 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.290316 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.290325 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.290340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.290352 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.293835 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.306657 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.318355 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.326758 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.329626 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdl5g\" (UniqueName: \"kubernetes.io/projected/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-kube-api-access-mdl5g\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.329794 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.329990 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.330057 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:41.830035137 +0000 UTC m=+117.646234737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.341658 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.347697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdl5g\" (UniqueName: \"kubernetes.io/projected/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-kube-api-access-mdl5g\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.358051 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.373659 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.392873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.393252 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.393320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.393384 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.393466 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.395844 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f74b3c10580201b3ebc5a550f45e82a80bc951402fac470e2f51f06d63491\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:38Z\\\",\\\"message\\\":\\\"topping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 17:32:38.552144 6370 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 17:32:38.552186 6370 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 17:32:38.552203 6370 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 17:32:38.552210 6370 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 17:32:38.552231 6370 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 17:32:38.552251 6370 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 17:32:38.552261 6370 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 17:32:38.552264 6370 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 17:32:38.552276 6370 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 17:32:38.552279 6370 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 17:32:38.552276 6370 factory.go:656] Stopping watch factory\\\\nI0223 17:32:38.552289 6370 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 17:32:38.552300 6370 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 17:32:38.552303 6370 ovnkube.go:599] Stopped ovnkube\\\\nI0223 17:32:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.407302 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.416428 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.430478 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.443750 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.457188 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.482431 4724 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:
30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.495441 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.495487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.495499 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.495520 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.495535 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.498707 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.511222 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.598814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.598854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.598866 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.598886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.598898 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.701550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.701964 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.702160 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.702366 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.702628 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.805921 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.805987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.806012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.806109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.806137 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.836140 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.836518 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.836638 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:42.836609995 +0000 UTC m=+118.652809635 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.892711 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/1.log" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.906031 4724 scope.go:117] "RemoveContainer" containerID="836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8" Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.906426 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.906680 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" event={"ID":"1b0504b6-05e5-451b-af95-1745052b85f1","Type":"ContainerStarted","Data":"16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.906746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" event={"ID":"1b0504b6-05e5-451b-af95-1745052b85f1","Type":"ContainerStarted","Data":"c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.908212 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.908277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.908291 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.908306 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.908319 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:41Z","lastTransitionTime":"2026-02-23T17:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.922128 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.942617 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.957460 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:41 crc kubenswrapper[4724]: E0223 17:32:41.957750 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.965987 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:41 crc kubenswrapper[4724]: I0223 17:32:41.982926 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:41Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.003496 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-c
ert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.011546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.011592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.011604 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.011622 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.011634 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.026093 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.042152 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.058301 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.072256 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.095897 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b
6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.109867 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.114734 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.114779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.114792 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.114810 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.114823 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.122444 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.137518 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.157783 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.170166 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:39:49.387518193 +0000 UTC
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.171053 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.191080 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.208371 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.217764 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.217794 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.217805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.217824 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.217838 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.224533 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.248718 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5
a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.260588 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce8
4a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.281967 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operat
or@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.297732 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.313581 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.320227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.320278 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.320298 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.320335 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.320360 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.328753 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.342985 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.366325 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b
6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.383593 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.411150 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a4
3362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.422803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.422876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.422899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.422929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.422948 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.428263 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.443973 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.457513 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.472129 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.484637 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.497869 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.517723 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.525134 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.525182 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.525196 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.525220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.525234 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.535089 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.551345 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.564045 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:42Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:42 crc 
kubenswrapper[4724]: I0223 17:32:42.628277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.628365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.628419 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.628452 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.628477 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.731901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.731976 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.731989 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.732009 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.732023 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.835130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.835181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.835192 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.835211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.835222 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.848453 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:32:42 crc kubenswrapper[4724]: E0223 17:32:42.848693 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 17:32:42 crc kubenswrapper[4724]: E0223 17:32:42.848813 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:44.848778196 +0000 UTC m=+120.664977846 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.938537 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.938612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.938624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.938644 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.938660 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:42Z","lastTransitionTime":"2026-02-23T17:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.950081 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.950130 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:32:42 crc kubenswrapper[4724]: E0223 17:32:42.950252 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:32:42 crc kubenswrapper[4724]: I0223 17:32:42.950160 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:32:42 crc kubenswrapper[4724]: E0223 17:32:42.950433 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4"
Feb 23 17:32:42 crc kubenswrapper[4724]: E0223 17:32:42.950546 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.041608 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.041677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.041699 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.041731 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.041752 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.144725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.144777 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.144787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.144805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.144818 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.170615 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:02:14.202309948 +0000 UTC
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.248092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.248171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.248192 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.248224 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.248244 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.350837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.350888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.350900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.350919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.350974 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.454793 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.454841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.454855 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.454877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.454894 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.558027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.558105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.558129 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.558158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.558177 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.662038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.662112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.662131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.662160 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.662179 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.765535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.765602 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.765619 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.765646 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.765665 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.868951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.869047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.869081 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.869120 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.869144 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.950949 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:32:43 crc kubenswrapper[4724]: E0223 17:32:43.951156 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.975539 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.975600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.975620 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.975655 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:43 crc kubenswrapper[4724]: I0223 17:32:43.975675 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:43Z","lastTransitionTime":"2026-02-23T17:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.079756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.080341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.080365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.080459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.080488 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.087525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.087597 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.087621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.087649 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.087671 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.114651 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.121060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.121126 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.121138 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.121160 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.121174 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.139147 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.144655 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.144707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.144723 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.144749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.144766 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.162182 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.166079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.166119 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.166131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.166167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.166181 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.171463 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:18:52.348746056 +0000 UTC Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.185010 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.189832 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.189994 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.190204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.190294 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.190357 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.206479 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.206916 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.209229 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
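
Every status-update attempt in this burst fails identically: the kubelet's PATCH is rejected because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-23, and once the retry budget is spent the kubelet logs "update node status exceeds retry count". Below is a minimal sketch, not part of the log, of how one might confirm what that endpoint is serving; it assumes Python 3 with the third-party cryptography package (version 42 or later for not_valid_after_utc) and that 127.0.0.1:9743 is reachable from where the script runs. The host and port are taken verbatim from the webhook URL in the errors.

# Sketch (assumptions as stated above): fetch the certificate served on the
# webhook port and report its notAfter. Verification is disabled on purpose,
# since an expired certificate would otherwise abort the handshake.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # copied from the Post URL in the errors

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False      # must be cleared before verify_mode
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER, no validation

cert = x509.load_der_x509_certificate(der)
not_after = cert.not_valid_after_utc  # cryptography >= 42
now = datetime.now(timezone.utc)
print(f"notAfter={not_after.isoformat()} expired={now > not_after}")
# Against this node it should report notAfter=2025-08-24T17:21:41+00:00,
# matching the x509 error in every failed PATCH above.

Since the node clock is long past that notAfter, the webhook will keep rejecting every node and pod status PATCH until its certificate is rotated or the clock discrepancy is resolved; nothing the kubelet retries can succeed in the meantime.
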
event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.209365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.209448 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.209530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.209597 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.313092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.313157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.313178 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.313207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.313227 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.417193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.417286 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.417310 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.417343 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.417362 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.520487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.520546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.520561 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.520582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.520596 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.624796 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.624862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.624877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.624902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.624921 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.727899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.728306 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.728500 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.728655 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.728808 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:44Z","lastTransitionTime":"2026-02-23T17:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.830247 4724 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.876642 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.877186 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.877456 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:48.877340832 +0000 UTC m=+124.693540472 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.950065 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.950149 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.950347 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.950778 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.950920 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:44 crc kubenswrapper[4724]: E0223 17:32:44.951797 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:44 crc kubenswrapper[4724]: I0223 17:32:44.977773 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:44.999878 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:44Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.019951 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.040130 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.061232 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.075949 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.103555 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.117266 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.133465 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T1
7:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.147862 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.158114 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.171738 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:34:55.713687486 +0000 UTC Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.181405 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: E0223 17:32:45.184122 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.200215 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.214153 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.231800 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.251629 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.269191 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.287438 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.302690 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T17:32:45Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:45 crc kubenswrapper[4724]: I0223 17:32:45.950463 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:45 crc kubenswrapper[4724]: E0223 17:32:45.950672 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:46 crc kubenswrapper[4724]: I0223 17:32:46.172584 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:37:37.551868694 +0000 UTC Feb 23 17:32:46 crc kubenswrapper[4724]: I0223 17:32:46.950660 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:46 crc kubenswrapper[4724]: I0223 17:32:46.950660 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:46 crc kubenswrapper[4724]: I0223 17:32:46.950683 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:46 crc kubenswrapper[4724]: E0223 17:32:46.950924 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:46 crc kubenswrapper[4724]: E0223 17:32:46.951172 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:46 crc kubenswrapper[4724]: E0223 17:32:46.951316 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:47 crc kubenswrapper[4724]: I0223 17:32:47.172822 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:15:37.5203564 +0000 UTC Feb 23 17:32:47 crc kubenswrapper[4724]: I0223 17:32:47.950882 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:47 crc kubenswrapper[4724]: E0223 17:32:47.951069 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:48 crc kubenswrapper[4724]: I0223 17:32:48.173495 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:08:20.188372114 +0000 UTC Feb 23 17:32:48 crc kubenswrapper[4724]: I0223 17:32:48.916435 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:48 crc kubenswrapper[4724]: E0223 17:32:48.916615 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:48 crc kubenswrapper[4724]: E0223 17:32:48.916683 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:32:56.916664513 +0000 UTC m=+132.732864113 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:48 crc kubenswrapper[4724]: I0223 17:32:48.950001 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:48 crc kubenswrapper[4724]: I0223 17:32:48.950028 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:48 crc kubenswrapper[4724]: I0223 17:32:48.950251 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:48 crc kubenswrapper[4724]: E0223 17:32:48.950322 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:48 crc kubenswrapper[4724]: E0223 17:32:48.950424 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:48 crc kubenswrapper[4724]: E0223 17:32:48.950203 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:49 crc kubenswrapper[4724]: I0223 17:32:49.174117 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:25:59.799987994 +0000 UTC Feb 23 17:32:49 crc kubenswrapper[4724]: I0223 17:32:49.950732 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:49 crc kubenswrapper[4724]: E0223 17:32:49.950889 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:50 crc kubenswrapper[4724]: I0223 17:32:50.174720 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:13:21.302328478 +0000 UTC Feb 23 17:32:50 crc kubenswrapper[4724]: E0223 17:32:50.186001 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 17:32:50 crc kubenswrapper[4724]: I0223 17:32:50.950244 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:50 crc kubenswrapper[4724]: I0223 17:32:50.950331 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:50 crc kubenswrapper[4724]: I0223 17:32:50.950340 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:50 crc kubenswrapper[4724]: E0223 17:32:50.950486 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:50 crc kubenswrapper[4724]: E0223 17:32:50.950666 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:50 crc kubenswrapper[4724]: E0223 17:32:50.950797 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:51 crc kubenswrapper[4724]: I0223 17:32:51.175578 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:30:11.515219512 +0000 UTC Feb 23 17:32:51 crc kubenswrapper[4724]: I0223 17:32:51.950898 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:51 crc kubenswrapper[4724]: E0223 17:32:51.951084 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:52 crc kubenswrapper[4724]: I0223 17:32:52.175928 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:30:43.615136278 +0000 UTC Feb 23 17:32:52 crc kubenswrapper[4724]: I0223 17:32:52.950468 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:52 crc kubenswrapper[4724]: I0223 17:32:52.950593 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:52 crc kubenswrapper[4724]: E0223 17:32:52.950718 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:52 crc kubenswrapper[4724]: I0223 17:32:52.950850 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:52 crc kubenswrapper[4724]: E0223 17:32:52.951073 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:52 crc kubenswrapper[4724]: E0223 17:32:52.951487 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:53 crc kubenswrapper[4724]: I0223 17:32:53.176825 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:13:04.924148873 +0000 UTC Feb 23 17:32:53 crc kubenswrapper[4724]: I0223 17:32:53.949993 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:53 crc kubenswrapper[4724]: E0223 17:32:53.950270 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:53 crc kubenswrapper[4724]: I0223 17:32:53.951052 4724 scope.go:117] "RemoveContainer" containerID="836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.178268 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:38:26.095350409 +0000 UTC Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.270193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.270238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.270250 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.270269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.270281 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:54Z","lastTransitionTime":"2026-02-23T17:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.283940 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.288540 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.288582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.288597 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.288617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.288628 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:54Z","lastTransitionTime":"2026-02-23T17:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.308263 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.314187 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.314251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.314264 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.314286 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.314300 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:54Z","lastTransitionTime":"2026-02-23T17:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.334078 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.340616 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.340700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.340721 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.340755 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.340779 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:54Z","lastTransitionTime":"2026-02-23T17:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.363767 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.370083 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.370139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.370154 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.370181 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.370196 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:32:54Z","lastTransitionTime":"2026-02-23T17:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.382852 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aaac6a71-65af-4ded-9945-71c01ce15653\\\",\\\"systemUUID\\\":\\\"883aa43b-ee67-45aa-9f6b-7760dc931d5e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.383012 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.950225 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.950276 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.950486 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.950598 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.950737 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.950890 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.959176 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/2.log" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.960101 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/1.log" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.963887 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26" exitCode=1 Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.963935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26"} Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.963977 4724 scope.go:117] "RemoveContainer" containerID="836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.964788 4724 scope.go:117] "RemoveContainer" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26" Feb 23 17:32:54 crc kubenswrapper[4724]: E0223 17:32:54.965120 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.979860 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:54 crc kubenswrapper[4724]: I0223 17:32:54.997838 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.014598 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.027040 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.039163 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 
17:32:55.053662 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.069872 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.092597 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.109257 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.125232 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30
:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.143929 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.155046 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.176945 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a4
3362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.178418 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:24:07.398614048 +0000 UTC Feb 23 17:32:55 crc kubenswrapper[4724]: E0223 17:32:55.186728 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
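
Every "Failed to update status for pod" entry above fails the same way: the kubelet's status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-23, so no pod on the node can have its status updated until that certificate is rotated (or the clock corrected, if it has jumped ahead). A minimal Go sketch to confirm the expiry from the node itself; only the endpoint address is taken from the log, everything else is illustrative:

// certcheck.go: fetch the webhook's serving certificate and print its
// validity window. A sketch, not part of the log; run on the node.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets us retrieve the certificate even though
	// verification would fail; that verification failure is exactly
	// what the kubelet is logging above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		// Mirrors the kubelet error: "current time ... is after ..."
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

If the notAfter printed here matches the 2025-08-24T17:21:41Z in the kubelet errors, the fault lies with the webhook's serving certificate itself, not with any of the pods whose status patches are being rejected.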
Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.200739 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.217816 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.228217 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.242462 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.254808 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.266117 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.280836 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.292894 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.304722 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.318731 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.330971 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.342418 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.355521 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.370422 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.384588 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T
17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.396418 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.410000 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.432260 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://087c3ef86529801a82bd5e2a43ee86c6b7c0edee
49efeae580674a4f19e47d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836900e86b8f55ee19369e12f0da55378195919b6fe800304ca8123ba9cf40e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"message\\\":\\\":}]\\\\nI0223 17:32:40.517340 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 17:32:40.517333 6585 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0223 17:32:40.517405 6585 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"rIPs:[10.217.4.174],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 17:32:54.773087 6820 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z]\\\\nI0223 17:32:54.773107 6820 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI0223 17:32:54.773116 6820 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.445226 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.457108 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.472152 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.485455 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.500981 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.521711 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d
56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.539055 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:55Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.950689 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:55 crc kubenswrapper[4724]: E0223 17:32:55.950893 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:55 crc kubenswrapper[4724]: I0223 17:32:55.970641 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/2.log" Feb 23 17:32:56 crc kubenswrapper[4724]: I0223 17:32:56.179477 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:08:13.080486357 +0000 UTC Feb 23 17:32:56 crc kubenswrapper[4724]: I0223 17:32:56.942999 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:56 crc kubenswrapper[4724]: E0223 17:32:56.943172 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:56 crc kubenswrapper[4724]: E0223 17:32:56.943236 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:12.943217684 +0000 UTC m=+148.759417284 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:32:56 crc kubenswrapper[4724]: I0223 17:32:56.950112 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:56 crc kubenswrapper[4724]: I0223 17:32:56.950176 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:56 crc kubenswrapper[4724]: I0223 17:32:56.950285 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:56 crc kubenswrapper[4724]: E0223 17:32:56.950333 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:56 crc kubenswrapper[4724]: E0223 17:32:56.950511 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:56 crc kubenswrapper[4724]: E0223 17:32:56.950650 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:57 crc kubenswrapper[4724]: I0223 17:32:57.180014 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:38:04.020620412 +0000 UTC Feb 23 17:32:57 crc kubenswrapper[4724]: I0223 17:32:57.950702 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:57 crc kubenswrapper[4724]: E0223 17:32:57.950907 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.107593 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.108784 4724 scope.go:117] "RemoveContainer" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26" Feb 23 17:32:58 crc kubenswrapper[4724]: E0223 17:32:58.108987 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.127492 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35021d6d-15bd-4153-9dae-7b002eff8c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26478861fb89db7eb6b5ba0a2089cad4360893bd95a71883be07831745c5c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f971a2a092b595760de8b7a9b6a0a13fa802d3fcb6a2c2b8922b01c9c57d782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d892cd1f6ea7fabb2e64e3e78d329f6e0efbf8585cc4f72fbcd29f31a035cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa4f2136891d0ef2a9eb9101dd70b7f305efa8e1e4d0301bc07fa57c0a211f92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.150483 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T17:32:54Z\\\",\\\"message\\\":\\\"rIPs:[10.217.4.174],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 17:32:54.773087 6820 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:54Z is after 2025-08-24T17:21:41Z]\\\\nI0223 17:32:54.773107 6820 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI0223 17:32:54.773116 6820 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpsrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78fmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.162293 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k77s6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582f9368-9429-4cf2-a78d-8a255fc140a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a07900f4330536e3783fb9566c8c52b4208d624f47a662f74ee0ff184fff648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hrjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k77s6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.180987 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:47:18.651846965 +0000 UTC Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.182238 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5334ad9-e4a5-4eec-b535-44664ba43c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ad3efcec8d953297d16ec3aa049656f57e69121a9dfc2f83572d1fe0c1f8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8b4c85243660568a55df3eb0867530e60512a01960c5be951763ccd9459be30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b43f0bff4db1bc23f440c84a11c3018196009ec0fe47a9cb4b4cd136e5fffd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee21f0fcd431e1f400bb68f6a5c7b5b8d38ced5a33133684160a3255ac61ec20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65474f581d37631a7193e3ebc061e9df69206ec245113f16c7930111a5f74d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200faace6300bc9dc3138b270b3fd029a6580cb06bfdd89eb73f97635abc7ac3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3268af99383f73c85412d56d39515a43362510c7a3a972e83752d5929d7d4935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ae1a073932b6e36d2af6015c8cd09df8d031c65d67e2d06c5bcc2fcd6b35407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.202968 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d114679fca6bbb2dbaaee0970449fe265c7c931665eb559d015abb61de3ec07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd3d46856b5d5dece678b672db6485b68b9b7fbe9f029f7d9df37906146674c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.218498 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d85ca9995343f8d309c902c37fc2db6faf539dfde44cc8334e279648ec2ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.231253 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2dn8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00434a2a-97a5-4d8f-9a6f-9dc5b372cd20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271eec7ff87c2b2d14aded00b308e9cd801b51352cbb3c7075bdc7037560eddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5pmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2dn8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.246923 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qssx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8179b275-39bb-472c-915f-a02b2a09c88d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c39702ed2e0af737992d239c015f1ec6aa187cbad84f1ad1431c11f179479ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5834e144917a9f037152ea525a0b425f1053407ebc5db5a09433bc8950db3a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee8d74243fdc2b520c1c145c37246abc0e2b653025c8626f05c62a9c8b20c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92923ccd7509d6f535098f7e46d3231ffaeed8c19edb7ed11c864d17e9489611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f91baa7de890c1792a75725746bacad51e728f8d8bdc1c0ccdc30ff78b4ca77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5570859da5ce3899539e8c87dc2ed79e262b6abd82c80893a6a9aa3c26f4e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a11e87c5073ca26ea5e71e822a38110312b4381fb1d1e62f52b1f55ff7d1cfdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qssx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.258293 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a065b197-b354-4d9b-b2e9-7d4882a3d1a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380efbba5a3d4d1c9997f3f90f6737934b602c7b736796ede2939f205beab4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cgpd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rw78r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.270523 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b0504b6-05e5-451b-af95-1745052b85f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5b7f52e2edfb544d2097b36591085d3305c7a33b691947a3dcbd50f4dd0f849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16654edfe25baa03a90ae1b93623bedb0e673bba605f08ae8f117631e8c34989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwgs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmvqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.283866 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2df91905-ff3d-4a7d-8e22-337861483a5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fabd2560e4a3826fb070276626936d120d612d37e2750e417f637b93b1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff1c2b88a730048902a1360cdbcddfb4f82c6a58c427edb24a792b254f55383\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 17:30:47.305065 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 17:30:47.307934 1 observer_polling.go:159] Starting file observer\\\\nI0223 17:30:47.346927 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 17:30:47.351923 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0223 17:31:17.828320 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:31:16Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bfa4a72a4a487fbc7703a3b60a9ef9a18b7fea6e58fd427f50361d8df9731d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2591d81bed9e5dbad670e16b2266367cfb386a3d68ffad9d09e0652130ee2609\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.295922 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbcb3c1b1daa4328b6e5e049e9f1270ee94da4897df26324f6e4cd898f565de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.307314 4724 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-mmxrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a042db-4057-4913-8091-da7d8c79feba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wrps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mmxrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.317145 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdl5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:32:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q2jvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.326570 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b84e23-8d8b-47b7-b3d4-269b16e9dfb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e581ef5d1fb9752e7582661dffceba50fc7b408467d9b0f0cca3b29e1838e72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce84a3041f24b15d94b08de10c290b2772bbe51df5a5ee4cc7c3a293dde11e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 
17:32:58.338173 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e1d6606-75fc-41fd-9c23-18ee248da2af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T17:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T17:31:25Z\\\",\\\"message\\\":\\\"W0223 17:31:24.378840 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 17:31:24.379771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771867884 cert, and key in /tmp/serving-cert-1953836024/serving-signer.crt, /tmp/serving-cert-1953836024/serving-signer.key\\\\nI0223 17:31:24.846158 1 observer_polling.go:159] Starting file observer\\\\nW0223 17:31:24.858497 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 17:31:24.858706 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 17:31:24.859835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1953836024/tls.crt::/tmp/serving-cert-1953836024/tls.key\\\\\\\"\\\\nF0223 17:31:25.358711 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T17:31:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T17:30:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-23T17:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T17:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T17:30:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.347774 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.357686 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.368579 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T17:31:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T17:32:58Z is after 2025-08-24T17:21:41Z" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.950177 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.950248 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:32:58 crc kubenswrapper[4724]: E0223 17:32:58.950481 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:32:58 crc kubenswrapper[4724]: I0223 17:32:58.950283 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:32:58 crc kubenswrapper[4724]: E0223 17:32:58.950703 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:32:58 crc kubenswrapper[4724]: E0223 17:32:58.950828 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:32:59 crc kubenswrapper[4724]: I0223 17:32:59.182225 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:00:18.764295662 +0000 UTC Feb 23 17:32:59 crc kubenswrapper[4724]: I0223 17:32:59.950481 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:32:59 crc kubenswrapper[4724]: E0223 17:32:59.950622 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:00 crc kubenswrapper[4724]: I0223 17:33:00.183088 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:23:50.368222413 +0000 UTC Feb 23 17:33:00 crc kubenswrapper[4724]: E0223 17:33:00.188219 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 17:33:00 crc kubenswrapper[4724]: I0223 17:33:00.950081 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:00 crc kubenswrapper[4724]: E0223 17:33:00.950307 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:00 crc kubenswrapper[4724]: I0223 17:33:00.950117 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:00 crc kubenswrapper[4724]: I0223 17:33:00.950106 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:00 crc kubenswrapper[4724]: E0223 17:33:00.950798 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:00 crc kubenswrapper[4724]: E0223 17:33:00.950914 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:01 crc kubenswrapper[4724]: I0223 17:33:01.184088 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:53:45.315855786 +0000 UTC Feb 23 17:33:01 crc kubenswrapper[4724]: I0223 17:33:01.950762 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:33:01 crc kubenswrapper[4724]: E0223 17:33:01.951138 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:33:02 crc kubenswrapper[4724]: I0223 17:33:02.184802 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:17:22.744746477 +0000 UTC
Feb 23 17:33:02 crc kubenswrapper[4724]: I0223 17:33:02.950722 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:33:02 crc kubenswrapper[4724]: I0223 17:33:02.950722 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:33:02 crc kubenswrapper[4724]: I0223 17:33:02.950815 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:33:02 crc kubenswrapper[4724]: E0223 17:33:02.950957 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:33:02 crc kubenswrapper[4724]: E0223 17:33:02.951077 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4"
Feb 23 17:33:02 crc kubenswrapper[4724]: E0223 17:33:02.951143 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:33:03 crc kubenswrapper[4724]: I0223 17:33:03.186351 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:06:10.327262598 +0000 UTC
Feb 23 17:33:03 crc kubenswrapper[4724]: I0223 17:33:03.951012 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 17:33:03 crc kubenswrapper[4724]: E0223 17:33:03.951258 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.186900 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:09:55.265312404 +0000 UTC
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.512632 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.512704 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.512722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.512749 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.512767 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T17:33:04Z","lastTransitionTime":"2026-02-23T17:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.593241 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"]
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.593933 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.597597 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.598098 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.598571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.600483 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.611096 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=68.611061706 podStartE2EDuration="1m8.611061706s" podCreationTimestamp="2026-02-23 17:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.610597574 +0000 UTC m=+140.426797214" watchObservedRunningTime="2026-02-23 17:33:04.611061706 +0000 UTC m=+140.427261346"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.631734 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.631705438 podStartE2EDuration="1m34.631705438s" podCreationTimestamp="2026-02-23 17:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.631107863 +0000 UTC m=+140.447307463" watchObservedRunningTime="2026-02-23 17:33:04.631705438 +0000 UTC m=+140.447905028"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.632292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.632370 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9881beff-3b24-4303-b979-87ce8311ca7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.632415 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9881beff-3b24-4303-b979-87ce8311ca7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.632460 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9881beff-3b24-4303-b979-87ce8311ca7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.632535 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.690841 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=92.690818845 podStartE2EDuration="1m32.690818845s" podCreationTimestamp="2026-02-23 17:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.690186919 +0000 UTC m=+140.506386519" watchObservedRunningTime="2026-02-23 17:33:04.690818845 +0000 UTC m=+140.507018445"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.728347 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-k77s6" podStartSLOduration=84.728306645 podStartE2EDuration="1m24.728306645s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.728272694 +0000 UTC m=+140.544472294" watchObservedRunningTime="2026-02-23 17:33:04.728306645 +0000 UTC m=+140.544506235"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.733368 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.733526 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9881beff-3b24-4303-b979-87ce8311ca7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.733549 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9881beff-3b24-4303-b979-87ce8311ca7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.733461 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.734021 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9881beff-3b24-4303-b979-87ce8311ca7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.734082 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.734155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9881beff-3b24-4303-b979-87ce8311ca7b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.734637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9881beff-3b24-4303-b979-87ce8311ca7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.747135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9881beff-3b24-4303-b979-87ce8311ca7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.759329 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9881beff-3b24-4303-b979-87ce8311ca7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4pmvh\" (UID: \"9881beff-3b24-4303-b979-87ce8311ca7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.776041 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=47.776023128 podStartE2EDuration="47.776023128s" podCreationTimestamp="2026-02-23 17:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.775383453 +0000 UTC m=+140.591583073" watchObservedRunningTime="2026-02-23 17:33:04.776023128 +0000 UTC m=+140.592222718"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.858216 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2dn8m" podStartSLOduration=84.858192137 podStartE2EDuration="1m24.858192137s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.832681304 +0000 UTC m=+140.648880904" watchObservedRunningTime="2026-02-23 17:33:04.858192137 +0000 UTC m=+140.674391747"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.858345 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qssx7" podStartSLOduration=84.858340111 podStartE2EDuration="1m24.858340111s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.85790901 +0000 UTC m=+140.674108640" watchObservedRunningTime="2026-02-23 17:33:04.858340111 +0000 UTC m=+140.674539721"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.891912 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podStartSLOduration=84.891889473 podStartE2EDuration="1m24.891889473s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.875194859 +0000 UTC m=+140.691394459" watchObservedRunningTime="2026-02-23 17:33:04.891889473 +0000 UTC m=+140.708089083"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.913688 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.943047 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmvqp" podStartSLOduration=83.943026252 podStartE2EDuration="1m23.943026252s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.891344109 +0000 UTC m=+140.707543719" watchObservedRunningTime="2026-02-23 17:33:04.943026252 +0000 UTC m=+140.759225852"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.953290 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.953366 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.953290 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:33:04 crc kubenswrapper[4724]: E0223 17:33:04.953490 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 17:33:04 crc kubenswrapper[4724]: E0223 17:33:04.953619 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4"
Feb 23 17:33:04 crc kubenswrapper[4724]: E0223 17:33:04.953692 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 17:33:04 crc kubenswrapper[4724]: I0223 17:33:04.982572 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.982551062 podStartE2EDuration="1m28.982551062s" podCreationTimestamp="2026-02-23 17:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:04.947927243 +0000 UTC m=+140.764126843" watchObservedRunningTime="2026-02-23 17:33:04.982551062 +0000 UTC m=+140.798750662"
Feb 23 17:33:05 crc kubenswrapper[4724]: I0223 17:33:05.008793 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4pmvh" event={"ID":"9881beff-3b24-4303-b979-87ce8311ca7b","Type":"ContainerStarted","Data":"94d4a7edebfd43b60d3ee5d04771cf8b15645ca712591875e4803ac03727cff9"}
Feb 23 17:33:05 crc kubenswrapper[4724]: I0223 17:33:05.017099 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mmxrg" podStartSLOduration=85.017068369 podStartE2EDuration="1m25.017068369s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:05.0030044 +0000 UTC m=+140.819204000" watchObservedRunningTime="2026-02-23 17:33:05.017068369 +0000 UTC m=+140.833267969"
Feb 23 17:33:05 crc kubenswrapper[4724]: I0223 17:33:05.187085 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:47:27.749684657 +0000 UTC
Feb 23 17:33:05 crc kubenswrapper[4724]: I0223 17:33:05.187529 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 23 17:33:05 crc kubenswrapper[4724]: E0223 17:33:05.189146 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 17:33:05 crc kubenswrapper[4724]: I0223 17:33:05.199232 4724 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
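The certificate_manager.go:356 lines above report the same kubelet-serving certificate (expiring 2026-02-24 05:53:03 UTC) with a different rotation deadline on every sync: client-go's certificate manager re-picks the deadline at random inside roughly the 70-90% band of the certificate's validity window, and starts rotation ("Rotating certificates", followed by the CertificateSigningRequest watch above) once a picked deadline lands in the past, as happened here with the 2025-12-16 deadline. A minimal sketch of that computation, assuming the 70-90% jitter band and a one-year certificate lifetime; both are assumptions about client-go internals, not taken from this log:

```go
// Sketch of a jittered certificate-rotation deadline, assuming the roughly
// 70-90% validity band used by client-go's certificate manager.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in [70%, 90%] of the certificate's
// validity window. A fresh pick on every sync is why each
// certificate_manager line above shows a different deadline for the same
// certificate.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration((0.7 + 0.2*rand.Float64()) * float64(total)))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year lifetime
	d := rotationDeadline(notBefore, notAfter)
	// Rotation begins once a picked deadline is already in the past.
	fmt.Println("deadline:", d, "rotate now:", time.Now().After(d))
}
```

The deadlines observed above (2025-11-07 through 2026-01-07) all fall inside that 70-90% band for a one-year certificate, which is what makes the seemingly erratic jumps expected behavior rather than a fault.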
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:06 crc kubenswrapper[4724]: E0223 17:33:06.951061 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:07 crc kubenswrapper[4724]: I0223 17:33:07.949926 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:07 crc kubenswrapper[4724]: E0223 17:33:07.950459 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:08 crc kubenswrapper[4724]: I0223 17:33:08.950669 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:08 crc kubenswrapper[4724]: I0223 17:33:08.950706 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:08 crc kubenswrapper[4724]: E0223 17:33:08.950917 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:08 crc kubenswrapper[4724]: I0223 17:33:08.951016 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:08 crc kubenswrapper[4724]: E0223 17:33:08.951174 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:08 crc kubenswrapper[4724]: E0223 17:33:08.951312 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:09 crc kubenswrapper[4724]: I0223 17:33:09.950870 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:09 crc kubenswrapper[4724]: E0223 17:33:09.951041 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:09 crc kubenswrapper[4724]: I0223 17:33:09.951807 4724 scope.go:117] "RemoveContainer" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26" Feb 23 17:33:09 crc kubenswrapper[4724]: E0223 17:33:09.951992 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78fmj_openshift-ovn-kubernetes(8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" Feb 23 17:33:10 crc kubenswrapper[4724]: E0223 17:33:10.190873 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 17:33:10 crc kubenswrapper[4724]: I0223 17:33:10.951048 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:10 crc kubenswrapper[4724]: I0223 17:33:10.951152 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:10 crc kubenswrapper[4724]: E0223 17:33:10.951359 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:10 crc kubenswrapper[4724]: E0223 17:33:10.951725 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:10 crc kubenswrapper[4724]: I0223 17:33:10.951843 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:10 crc kubenswrapper[4724]: E0223 17:33:10.951944 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
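Every "Container runtime network not ready" and "Error syncing pod" line above reduces to one condition: the container runtime's network plugin has not yet found a CNI configuration file in /etc/kubernetes/cni/net.d/, and it will not until the ovnkube-controller container (being restarted above under a crash-loop back-off, currently "back-off 20s") stays up long enough to write one. A minimal sketch of that readiness test, assuming the conventional CNI config file extensions; the real check lives inside the container runtime's CNI code, not in this exact form:

```go
// Sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/", assuming conventional CNI extensions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasCNIConfig reports whether any CNI network configuration exists in dir.
// While this returns false, the runtime reports NetworkReady=false and the
// kubelet keeps the node NotReady and refuses to sync networked pods.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println("NetworkReady:", ok, "err:", err)
}
```

Note the five-second cadence of the kubelet.go:2916 lines (17:33:05, :10, :15, :20): the runtime status is re-polled on a fixed interval, so the message repeats until the config file appears.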
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:11 crc kubenswrapper[4724]: I0223 17:33:11.950794 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:11 crc kubenswrapper[4724]: E0223 17:33:11.951035 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:12 crc kubenswrapper[4724]: I0223 17:33:12.950726 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:12 crc kubenswrapper[4724]: I0223 17:33:12.950776 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:12 crc kubenswrapper[4724]: I0223 17:33:12.950752 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:12 crc kubenswrapper[4724]: E0223 17:33:12.950897 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:12 crc kubenswrapper[4724]: E0223 17:33:12.951022 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:12 crc kubenswrapper[4724]: E0223 17:33:12.951221 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:12 crc kubenswrapper[4724]: I0223 17:33:12.954737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:12 crc kubenswrapper[4724]: E0223 17:33:12.954904 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:33:12 crc kubenswrapper[4724]: E0223 17:33:12.954969 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs podName:e106e1ec-19f4-4d6b-b71f-dc04dcc437b4 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.954950068 +0000 UTC m=+180.771149668 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs") pod "network-metrics-daemon-q2jvs" (UID: "e106e1ec-19f4-4d6b-b71f-dc04dcc437b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 17:33:13 crc kubenswrapper[4724]: I0223 17:33:13.950221 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:13 crc kubenswrapper[4724]: E0223 17:33:13.950430 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:14 crc kubenswrapper[4724]: I0223 17:33:14.950070 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:14 crc kubenswrapper[4724]: I0223 17:33:14.950112 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:14 crc kubenswrapper[4724]: E0223 17:33:14.951749 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:14 crc kubenswrapper[4724]: I0223 17:33:14.951780 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:14 crc kubenswrapper[4724]: E0223 17:33:14.951911 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:14 crc kubenswrapper[4724]: E0223 17:33:14.952124 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:15 crc kubenswrapper[4724]: I0223 17:33:15.057768 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/0.log" Feb 23 17:33:15 crc kubenswrapper[4724]: I0223 17:33:15.057873 4724 generic.go:334] "Generic (PLEG): container finished" podID="45a042db-4057-4913-8091-da7d8c79feba" containerID="1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300" exitCode=1 Feb 23 17:33:15 crc kubenswrapper[4724]: I0223 17:33:15.057930 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerDied","Data":"1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300"} Feb 23 17:33:15 crc kubenswrapper[4724]: I0223 17:33:15.058740 4724 scope.go:117] "RemoveContainer" containerID="1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300" Feb 23 17:33:15 crc kubenswrapper[4724]: E0223 17:33:15.191587 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 17:33:15 crc kubenswrapper[4724]: I0223 17:33:15.950142 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:15 crc kubenswrapper[4724]: E0223 17:33:15.950276 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:16 crc kubenswrapper[4724]: I0223 17:33:16.063494 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/0.log" Feb 23 17:33:16 crc kubenswrapper[4724]: I0223 17:33:16.063580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerStarted","Data":"226aa2be31b966ee054e9088dea89c730f96f6f6438d8c45123ad5997ba318a1"} Feb 23 17:33:16 crc kubenswrapper[4724]: I0223 17:33:16.950861 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:16 crc kubenswrapper[4724]: I0223 17:33:16.950967 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:16 crc kubenswrapper[4724]: E0223 17:33:16.951034 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:16 crc kubenswrapper[4724]: I0223 17:33:16.951076 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:16 crc kubenswrapper[4724]: E0223 17:33:16.951193 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:16 crc kubenswrapper[4724]: E0223 17:33:16.951338 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:17 crc kubenswrapper[4724]: I0223 17:33:17.950712 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:17 crc kubenswrapper[4724]: E0223 17:33:17.950937 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:18 crc kubenswrapper[4724]: I0223 17:33:18.950373 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:18 crc kubenswrapper[4724]: I0223 17:33:18.950381 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:18 crc kubenswrapper[4724]: E0223 17:33:18.950570 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:18 crc kubenswrapper[4724]: E0223 17:33:18.950746 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:18 crc kubenswrapper[4724]: I0223 17:33:18.950843 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:18 crc kubenswrapper[4724]: E0223 17:33:18.950933 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:19 crc kubenswrapper[4724]: I0223 17:33:19.950296 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:19 crc kubenswrapper[4724]: E0223 17:33:19.950462 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:20 crc kubenswrapper[4724]: E0223 17:33:20.193427 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 17:33:20 crc kubenswrapper[4724]: I0223 17:33:20.950485 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:20 crc kubenswrapper[4724]: I0223 17:33:20.950520 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:20 crc kubenswrapper[4724]: E0223 17:33:20.950723 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:20 crc kubenswrapper[4724]: E0223 17:33:20.950814 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:20 crc kubenswrapper[4724]: I0223 17:33:20.950887 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:20 crc kubenswrapper[4724]: E0223 17:33:20.950956 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:21 crc kubenswrapper[4724]: I0223 17:33:21.950700 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:21 crc kubenswrapper[4724]: I0223 17:33:21.951521 4724 scope.go:117] "RemoveContainer" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26" Feb 23 17:33:21 crc kubenswrapper[4724]: E0223 17:33:21.951677 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:22 crc kubenswrapper[4724]: I0223 17:33:22.881830 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q2jvs"] Feb 23 17:33:22 crc kubenswrapper[4724]: I0223 17:33:22.881986 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:22 crc kubenswrapper[4724]: E0223 17:33:22.882173 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:22 crc kubenswrapper[4724]: I0223 17:33:22.950161 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:22 crc kubenswrapper[4724]: I0223 17:33:22.950243 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:22 crc kubenswrapper[4724]: E0223 17:33:22.950289 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:22 crc kubenswrapper[4724]: E0223 17:33:22.950358 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:23 crc kubenswrapper[4724]: I0223 17:33:23.092292 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/2.log" Feb 23 17:33:23 crc kubenswrapper[4724]: I0223 17:33:23.096099 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerStarted","Data":"88158ddc63919d018224228d921f0c979df519c84676428b368e05e5728e7216"} Feb 23 17:33:23 crc kubenswrapper[4724]: I0223 17:33:23.096739 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:33:23 crc kubenswrapper[4724]: I0223 17:33:23.134257 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podStartSLOduration=102.134233207 podStartE2EDuration="1m42.134233207s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:23.134197416 +0000 UTC m=+158.950397016" watchObservedRunningTime="2026-02-23 17:33:23.134233207 +0000 UTC m=+158.950432807" Feb 23 17:33:23 crc kubenswrapper[4724]: I0223 17:33:23.950722 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:23 crc kubenswrapper[4724]: E0223 17:33:23.950962 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 17:33:24 crc kubenswrapper[4724]: I0223 17:33:24.950452 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:24 crc kubenswrapper[4724]: I0223 17:33:24.950476 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:24 crc kubenswrapper[4724]: I0223 17:33:24.950487 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:24 crc kubenswrapper[4724]: E0223 17:33:24.952681 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q2jvs" podUID="e106e1ec-19f4-4d6b-b71f-dc04dcc437b4" Feb 23 17:33:24 crc kubenswrapper[4724]: E0223 17:33:24.952904 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 17:33:24 crc kubenswrapper[4724]: E0223 17:33:24.953126 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 17:33:25 crc kubenswrapper[4724]: I0223 17:33:25.951051 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:25 crc kubenswrapper[4724]: I0223 17:33:25.956570 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 17:33:25 crc kubenswrapper[4724]: I0223 17:33:25.957074 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.950851 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.950920 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.951107 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.953757 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.954003 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.956074 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 17:33:26 crc kubenswrapper[4724]: I0223 17:33:26.956493 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 17:33:28 crc kubenswrapper[4724]: I0223 17:33:28.142574 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.051945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.098648 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.099468 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.099461 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xttsp"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.100600 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.101320 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.101854 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.102719 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.104234 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.106297 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.106931 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.107698 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.109724 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.125616 4724 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.125708 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.125899 4724 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.125928 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.129233 4724 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.129298 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.129379 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.130261 4724 util.go:30] "No sandbox for pod can be found. 
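The "no relationship found between node 'crc' and this object" warnings are the Node authorizer at work, not a broken RBAC rule: kube-apiserver lets a kubelet read a secret or configmap only after it has observed a pod bound to that node which references the object. When a batch of pods lands at once (the SyncLoop ADDs above, right after NodeReady), the kubelet's reflectors briefly race the authorizer's graph and recover on their own, as the interleaved "Caches populated" lines show. A toy model of the graph check, with hypothetical data structures; the real implementation lives inside kube-apiserver:

```go
// Toy model of the Node authorizer's relationship check; the map contents
// and function names here are hypothetical illustrations only.
package main

import "fmt"

type ref struct{ namespace, name string }

// refsByNode holds, per node, the secrets/configmaps referenced by pods
// currently bound to that node. An edge appears only once the pod binding
// has been observed by the authorizer.
var refsByNode = map[string]map[ref]bool{
	"crc": {{"openshift-controller-manager", "config"}: true},
}

// nodeCanGet mirrors the failing check above: with no bound pod referencing
// the object yet, there is "no relationship found" and access is denied.
func nodeCanGet(node string, r ref) bool {
	return refsByNode[node][r]
}

func main() {
	// Denied until a pod on "crc" referencing the secret is bound.
	fmt.Println(nodeCanGet("crc", ref{"openshift-controller-manager", "serving-cert"}))
}
```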
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.135341 4724 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.135474 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.135745 4724 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.135787 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.135963 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.136371 4724 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.136438 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.136758 4724 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.136790 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.136943 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.137429 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.138619 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.138923 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.139601 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: W0223 17:33:35.139856 4724 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 23 17:33:35 crc kubenswrapper[4724]: E0223 17:33:35.139903 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.140838 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.141048 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.141348 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.142315 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.142692 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.143232 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.143432 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.143599 4724 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.143761 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.144030 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.144310 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.144686 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.144969 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.145037 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.145301 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.145526 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.145784 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.146119 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.146377 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.146666 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.146898 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.147117 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.147338 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.147591 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.147856 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.149266 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.150442 4724 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sg2k6"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.151331 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.151817 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.154177 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.155094 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.155521 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.155763 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.155836 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.156007 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.156191 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.156334 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.156459 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.462024 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8hzn4"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.466052 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab9dc328-de73-425e-ac20-9af46c731a01-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.468596 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.470298 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478351 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478453 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478490 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478710 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478759 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nh95"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478800 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab9dc328-de73-425e-ac20-9af46c731a01-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478843 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478916 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w8kf8\" (UniqueName: \"kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478975 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.478982 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnwc\" (UniqueName: \"kubernetes.io/projected/ab9dc328-de73-425e-ac20-9af46c731a01-kube-api-access-6bnwc\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.479452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.479539 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.508434 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.509058 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ch92v"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.509403 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p4vpc"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.509819 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fknnv"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.510199 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.510558 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.510900 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.511055 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.511619 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.511935 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.512229 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.512542 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.517966 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.518328 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.518642 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.518943 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.516939 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.520216 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.520438 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.520545 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.520748 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.520943 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.521183 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.521376 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.522695 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.522811 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.524052 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c4l9z"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.524733 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xttsp"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.524780 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.525104 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.522862 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.523002 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.523283 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.523327 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.523426 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.522775 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.529153 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.530118 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.530491 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.545655 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.546403 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.550571 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.551513 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.551557 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.559579 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.560648 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.560886 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.559736 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.559951 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.561506 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.561528 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.561540 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.561880 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.566377 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-s77tw"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567244 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567293 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567598 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567632 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567713 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.567774 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.568726 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.569202 4724 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.595380 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.596522 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.596909 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.605075 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.607642 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.610862 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.611341 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612005 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612052 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-serving-cert\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612077 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab9dc328-de73-425e-ac20-9af46c731a01-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-625kc\" (UniqueName: \"kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612203 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kf8\" (UniqueName: \"kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612239 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnwc\" (UniqueName: \"kubernetes.io/projected/ab9dc328-de73-425e-ac20-9af46c731a01-kube-api-access-6bnwc\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612265 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612317 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cfbc29a-94db-48d8-9393-38c581d767a5-machine-approver-tls\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612341 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612364 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-encryption-config\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612437 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pbp\" (UniqueName: \"kubernetes.io/projected/9cfbc29a-94db-48d8-9393-38c581d767a5-kube-api-access-94pbp\") pod \"machine-approver-56656f9798-hhhgx\" (UID: 
\"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612463 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a9e0634-64a7-4106-8a10-bfed1ab672da-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612485 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjmz9\" (UniqueName: \"kubernetes.io/projected/a71874e2-c4df-47f7-af47-b85d817995bf-kube-api-access-xjmz9\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612523 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612562 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-config\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612610 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.612993 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: 
\"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613150 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-audit-policies\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613254 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-client\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613728 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab9dc328-de73-425e-ac20-9af46c731a01-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613802 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.613833 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.614348 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.614420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-images\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 
17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620539 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsvf\" (UniqueName: \"kubernetes.io/projected/fe2c617a-30bc-4095-b085-d6306827fcce-kube-api-access-8gsvf\") pod \"downloads-7954f5f757-8hzn4\" (UID: \"fe2c617a-30bc-4095-b085-d6306827fcce\") " pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620656 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1ba78f6-528b-46c5-b908-a0b5e69d4787-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620704 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620761 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxgm\" (UniqueName: \"kubernetes.io/projected/f1ba78f6-528b-46c5-b908-a0b5e69d4787-kube-api-access-4qxgm\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620793 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620823 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620753 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620892 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620765 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.620879 4724 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.621476 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.621502 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.621626 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsml\" (UniqueName: \"kubernetes.io/projected/4a9e0634-64a7-4106-8a10-bfed1ab672da-kube-api-access-qhsml\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.627379 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab9dc328-de73-425e-ac20-9af46c731a01-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.621682 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a71874e2-c4df-47f7-af47-b85d817995bf-audit-dir\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.628266 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.639547 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.640137 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.642967 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646447 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646105 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646881 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
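With the first "MountVolume.SetUp succeeded" entries above, the three-phase sequence for those volumes is complete. A small Go filter that pairs the phases per UniqueName when this journal is fed on stdin; the regexes match only the exact message strings visible in these entries:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // Pairs the three volume phases per UniqueName:
    //   reconciler_common.go:245   VerifyControllerAttachedVolume started
    //   reconciler_common.go:218   MountVolume started
    //   operation_generator.go:637 MountVolume.SetUp succeeded
    var (
    	stampRe = regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)`)
    	volRe   = regexp.MustCompile(`UniqueName: \\"([^\\]+)\\"`)
    )

    func main() {
    	phases := map[string]map[string]string{} // uniqueName -> phase -> time
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
    	for sc.Scan() {
    		line := sc.Text()
    		var phase string
    		switch {
    		case strings.Contains(line, "VerifyControllerAttachedVolume started"):
    			phase = "verify"
    		case strings.Contains(line, "MountVolume started"):
    			phase = "mount"
    		case strings.Contains(line, "MountVolume.SetUp succeeded"):
    			phase = "setup"
    		default:
    			continue
    		}
    		stamp := stampRe.FindStringSubmatch(line)
    		vol := volRe.FindStringSubmatch(line)
    		if stamp == nil || vol == nil {
    			continue
    		}
    		if phases[vol[1]] == nil {
    			phases[vol[1]] = map[string]string{}
    		}
    		phases[vol[1]][phase] = stamp[1]
    	}
    	for name, p := range phases {
    		fmt.Printf("%s verify=%s mount=%s setup=%s\n",
    			name, p["verify"], p["mount"], p["setup"])
    	}
    }

Something like "journalctl -u kubelet | go run volphases.go" (file name hypothetical) would surface any volume that verified but never reached SetUp.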
\"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646115 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646225 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.646264 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.647738 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.648673 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.648874 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.649974 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.650208 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.650527 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.650777 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.650932 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.651047 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.651844 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.652289 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.652323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-auth-proxy-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.652371 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.653404 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.654541 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.654834 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtsjf"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655123 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab9dc328-de73-425e-ac20-9af46c731a01-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655325 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655355 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655579 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655618 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655797 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655815 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.655938 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.656072 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.656588 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.657523 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.657686 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.657963 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.657704 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.658221 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.658346 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.658379 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.658692 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.660011 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.661535 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.662069 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.663499 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.667274 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.671181 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.671663 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.672357 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.672357 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sg2k6"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.672472 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.676374 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.677204 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.678597 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sjpqd"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.680557 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.681155 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.683657 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.685038 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.693632 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.694238 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.694708 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6rznq"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.694868 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.695706 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kmgv"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.696209 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.709794 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-72w5z"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.710027 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.710113 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.713168 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.714272 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.714688 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.715218 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729447 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729638 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729496 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c4l9z"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729707 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ch92v"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729722 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729737 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.729760 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.730104 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.732862 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb"] Feb 23 17:33:35 crc kubenswrapper[4724]: I0223 17:33:35.743231 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.039136 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.039673 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.041592 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042509 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-encryption-config\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66b7c770-0864-43b0-8be8-8c9e26cedb5f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042649 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g797b\" (UniqueName: \"kubernetes.io/projected/000721f3-4213-4d68-b390-d172a0fea797-kube-api-access-g797b\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042682 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042722 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042748 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94pbp\" (UniqueName: \"kubernetes.io/projected/9cfbc29a-94db-48d8-9393-38c581d767a5-kube-api-access-94pbp\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042779 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a9e0634-64a7-4106-8a10-bfed1ab672da-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042811 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjmz9\" (UniqueName: \"kubernetes.io/projected/a71874e2-c4df-47f7-af47-b85d817995bf-kube-api-access-xjmz9\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.042839 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.043368 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.043427 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.044151 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8hzn4"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-config\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc 
kubenswrapper[4724]: I0223 17:33:36.045158 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-config\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5519032-42eb-483d-8bc4-a1fad9b5dc28-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045593 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045666 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-audit-policies\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-client\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045754 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045801 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045836 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-client\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045929 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-images\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.045970 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-serving-cert\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046028 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1ba78f6-528b-46c5-b908-a0b5e69d4787-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046069 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gsvf\" (UniqueName: \"kubernetes.io/projected/fe2c617a-30bc-4095-b085-d6306827fcce-kube-api-access-8gsvf\") pod \"downloads-7954f5f757-8hzn4\" (UID: \"fe2c617a-30bc-4095-b085-d6306827fcce\") " pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046140 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qxgm\" (UniqueName: \"kubernetes.io/projected/f1ba78f6-528b-46c5-b908-a0b5e69d4787-kube-api-access-4qxgm\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046178 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046235 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-config\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046246 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7c770-0864-43b0-8be8-8c9e26cedb5f-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: 
\"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046336 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhsml\" (UniqueName: \"kubernetes.io/projected/4a9e0634-64a7-4106-8a10-bfed1ab672da-kube-api-access-qhsml\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046494 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a71874e2-c4df-47f7-af47-b85d817995bf-audit-dir\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046736 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046898 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046931 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.046966 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-auth-proxy-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047002 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/000721f3-4213-4d68-b390-d172a0fea797-metrics-tls\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047042 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a71874e2-c4df-47f7-af47-b85d817995bf-audit-dir\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: 
\"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047488 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047626 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p4vpc"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.047732 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.048018 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a9e0634-64a7-4106-8a10-bfed1ab672da-images\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.048526 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.048548 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-audit-policies\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.048830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-auth-proxy-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.048868 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049057 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049285 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049421 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fknnv"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049483 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a9e0634-64a7-4106-8a10-bfed1ab672da-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5519032-42eb-483d-8bc4-a1fad9b5dc28-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.049989 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.050020 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.050077 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98r5\" (UniqueName: \"kubernetes.io/projected/66b7c770-0864-43b0-8be8-8c9e26cedb5f-kube-api-access-r98r5\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.050655 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 
17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.050804 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfbc29a-94db-48d8-9393-38c581d767a5-config\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.051254 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-serving-cert\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.051315 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-service-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.051351 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5mrc\" (UniqueName: \"kubernetes.io/projected/a43a88c1-27e9-46ab-a605-3aed976d512c-kube-api-access-l5mrc\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.051417 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.051447 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-625kc\" (UniqueName: \"kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.052556 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1ba78f6-528b-46c5-b908-a0b5e69d4787-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.053846 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054107 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5xvjh"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054465 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054871 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054898 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.054998 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.055384 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.055624 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.055695 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.055913 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056049 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056074 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-serving-cert\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056189 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bdrlz"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056243 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056291 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415b36c5-1000-4ff9-9640-1ec29a3728c1-serving-cert\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056434 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056481 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056534 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqlx\" (UniqueName: \"kubernetes.io/projected/415b36c5-1000-4ff9-9640-1ec29a3728c1-kube-api-access-qnqlx\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056628 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cfbc29a-94db-48d8-9393-38c581d767a5-machine-approver-tls\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx2n\" (UniqueName: \"kubernetes.io/projected/a5519032-42eb-483d-8bc4-a1fad9b5dc28-kube-api-access-wvx2n\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056802 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsz4\" (UniqueName: \"kubernetes.io/projected/334ae0f0-e733-430f-b670-4ed4244bfa22-kube-api-access-rrsz4\") pod \"migrator-59844c95c7-plsvj\" (UID: \"334ae0f0-e733-430f-b670-4ed4244bfa22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.056837 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-config\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.057501 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.057602 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.057906 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.058073 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71874e2-c4df-47f7-af47-b85d817995bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.058348 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nh95"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.059596 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.060411 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cfbc29a-94db-48d8-9393-38c581d767a5-machine-approver-tls\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.060726 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.061795 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.063072 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.063695 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.064798 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtsjf"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.066346 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.068103 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.068620 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-etcd-client\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.069093 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnwc\" (UniqueName: \"kubernetes.io/projected/ab9dc328-de73-425e-ac20-9af46c731a01-kube-api-access-6bnwc\") pod \"openshift-apiserver-operator-796bbdcf4f-rrb7b\" (UID: \"ab9dc328-de73-425e-ac20-9af46c731a01\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.069701 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.070754 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sjpqd"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.071845 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bdrlz"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.072932 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.072976 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.073955 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72w5z"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.075004 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.075209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.075650 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a71874e2-c4df-47f7-af47-b85d817995bf-encryption-config\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.076282 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6rznq"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.077577 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.078843 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kmgv"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.080124 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qbqjf"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.081249 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qbqjf"] Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.081438 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.082858 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.083018 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kf8\" (UniqueName: \"kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.124868 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.144163 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-serving-cert\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157717 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7c770-0864-43b0-8be8-8c9e26cedb5f-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157757 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/000721f3-4213-4d68-b390-d172a0fea797-metrics-tls\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157780 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5519032-42eb-483d-8bc4-a1fad9b5dc28-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157813 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r98r5\" (UniqueName: \"kubernetes.io/projected/66b7c770-0864-43b0-8be8-8c9e26cedb5f-kube-api-access-r98r5\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157836 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-service-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: 
\"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157851 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5mrc\" (UniqueName: \"kubernetes.io/projected/a43a88c1-27e9-46ab-a605-3aed976d512c-kube-api-access-l5mrc\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415b36c5-1000-4ff9-9640-1ec29a3728c1-serving-cert\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157902 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157921 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157942 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnqlx\" (UniqueName: \"kubernetes.io/projected/415b36c5-1000-4ff9-9640-1ec29a3728c1-kube-api-access-qnqlx\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.157960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvx2n\" (UniqueName: \"kubernetes.io/projected/a5519032-42eb-483d-8bc4-a1fad9b5dc28-kube-api-access-wvx2n\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.158885 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.158923 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: 
I0223 17:33:36.158973 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-config\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159007 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsz4\" (UniqueName: \"kubernetes.io/projected/334ae0f0-e733-430f-b670-4ed4244bfa22-kube-api-access-rrsz4\") pod \"migrator-59844c95c7-plsvj\" (UID: \"334ae0f0-e733-430f-b670-4ed4244bfa22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66b7c770-0864-43b0-8be8-8c9e26cedb5f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159209 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g797b\" (UniqueName: \"kubernetes.io/projected/000721f3-4213-4d68-b390-d172a0fea797-kube-api-access-g797b\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159229 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-config\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159244 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-service-ca\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-config\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159418 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a5519032-42eb-483d-8bc4-a1fad9b5dc28-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-client\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159523 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66b7c770-0864-43b0-8be8-8c9e26cedb5f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.159861 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a88c1-27e9-46ab-a605-3aed976d512c-config\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.160260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415b36c5-1000-4ff9-9640-1ec29a3728c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.160485 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5519032-42eb-483d-8bc4-a1fad9b5dc28-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.161447 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7c770-0864-43b0-8be8-8c9e26cedb5f-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.162739 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-serving-cert\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.163267 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a43a88c1-27e9-46ab-a605-3aed976d512c-etcd-client\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" 
Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.163451 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/000721f3-4213-4d68-b390-d172a0fea797-metrics-tls\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.163556 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/415b36c5-1000-4ff9-9640-1ec29a3728c1-serving-cert\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.164486 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.164919 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5519032-42eb-483d-8bc4-a1fad9b5dc28-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.184032 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.203999 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.224679 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.245793 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.263901 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.264195 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.284639 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.305563 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.323862 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.346541 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.365341 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.384318 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.404261 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.424577 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.444173 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.471589 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.484102 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.485589 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b"] Feb 23 17:33:36 crc kubenswrapper[4724]: W0223 17:33:36.498127 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9dc328_de73_425e_ac20_9af46c731a01.slice/crio-c505a16b808e5fccca25ec9b69e9e6a9b56243e80f5464c9b758fd6e163a0610 WatchSource:0}: Error finding container c505a16b808e5fccca25ec9b69e9e6a9b56243e80f5464c9b758fd6e163a0610: Status 404 returned error can't find the container with id c505a16b808e5fccca25ec9b69e9e6a9b56243e80f5464c9b758fd6e163a0610 Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.511687 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.523886 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.543942 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.564747 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.583231 4724 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.605926 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.613654 4724 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.613791 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca podName:2c14acfb-83f3-4782-84df-6558dde9c268 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.113742802 +0000 UTC m=+172.929942402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca") pod "controller-manager-879f6c89f-5jdvd" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268") : failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.621421 4724 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.621538 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config podName:7746d0a1-242b-4afc-b968-36853a4ad1ac nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.121505585 +0000 UTC m=+172.937705185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config") pod "route-controller-manager-6576b87f9c-wgdx8" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac") : failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.621545 4724 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.621601 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles podName:2c14acfb-83f3-4782-84df-6558dde9c268 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.121587617 +0000 UTC m=+172.937787217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles") pod "controller-manager-879f6c89f-5jdvd" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268") : failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.624464 4724 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.624531 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.624558 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert podName:2c14acfb-83f3-4782-84df-6558dde9c268 nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.12453715 +0000 UTC m=+172.940736920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert") pod "controller-manager-879f6c89f-5jdvd" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268") : failed to sync secret cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.644977 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.658719 4724 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: E0223 17:33:36.658845 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert podName:7746d0a1-242b-4afc-b968-36853a4ad1ac nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.15881424 +0000 UTC m=+172.975013840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert") pod "route-controller-manager-6576b87f9c-wgdx8" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac") : failed to sync secret cache: timed out waiting for the condition Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.664052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.684642 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.703677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.722726 4724 request.go:700] Waited for 1.011906675s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0 Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.724572 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.744617 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.763334 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.784075 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.803211 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.823581 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.843630 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.864266 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.884963 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.903489 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.924617 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.943205 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 17:33:36 crc 
kubenswrapper[4724]: I0223 17:33:36.964897 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 17:33:36 crc kubenswrapper[4724]: I0223 17:33:36.984428 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.004434 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.025471 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.044257 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.060646 4724 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.060697 4724 projected.go:194] Error preparing data for projected volume kube-api-access-w8nmb for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8: failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.060801 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb podName:7746d0a1-242b-4afc-b968-36853a4ad1ac nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.560770962 +0000 UTC m=+173.376970562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w8nmb" (UniqueName: "kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb") pod "route-controller-manager-6576b87f9c-wgdx8" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac") : failed to sync configmap cache: timed out waiting for the condition Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.082801 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjmz9\" (UniqueName: \"kubernetes.io/projected/a71874e2-c4df-47f7-af47-b85d817995bf-kube-api-access-xjmz9\") pod \"apiserver-7bbb656c7d-p78b5\" (UID: \"a71874e2-c4df-47f7-af47-b85d817995bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.103157 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94pbp\" (UniqueName: \"kubernetes.io/projected/9cfbc29a-94db-48d8-9393-38c581d767a5-kube-api-access-94pbp\") pod \"machine-approver-56656f9798-hhhgx\" (UID: \"9cfbc29a-94db-48d8-9393-38c581d767a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.118518 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gsvf\" (UniqueName: \"kubernetes.io/projected/fe2c617a-30bc-4095-b085-d6306827fcce-kube-api-access-8gsvf\") pod \"downloads-7954f5f757-8hzn4\" (UID: \"fe2c617a-30bc-4095-b085-d6306827fcce\") " pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.142457 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qxgm\" (UniqueName: 
\"kubernetes.io/projected/f1ba78f6-528b-46c5-b908-a0b5e69d4787-kube-api-access-4qxgm\") pod \"cluster-samples-operator-665b6dd947-9k7fm\" (UID: \"f1ba78f6-528b-46c5-b908-a0b5e69d4787\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.157773 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" event={"ID":"ab9dc328-de73-425e-ac20-9af46c731a01","Type":"ContainerStarted","Data":"e8c0df6dd640db39f8796a7d5d7b2b0969160fc69599cb033731ce9e484a2ee2"} Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.157856 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" event={"ID":"ab9dc328-de73-425e-ac20-9af46c731a01","Type":"ContainerStarted","Data":"c505a16b808e5fccca25ec9b69e9e6a9b56243e80f5464c9b758fd6e163a0610"} Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.161677 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhsml\" (UniqueName: \"kubernetes.io/projected/4a9e0634-64a7-4106-8a10-bfed1ab672da-kube-api-access-qhsml\") pod \"machine-api-operator-5694c8668f-xttsp\" (UID: \"4a9e0634-64a7-4106-8a10-bfed1ab672da\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.177685 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.177749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.177779 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.177812 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.177844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 
17:33:37.178001 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-625kc\" (UniqueName: \"kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc\") pod \"oauth-openshift-558db77b4-4kcvg\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.183350 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.203887 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.214146 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.217489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.224724 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.225020 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.231801 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.245670 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.264012 4724 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.285045 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.365751 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.366094 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.366226 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.373226 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381256 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5470bb85-cb17-49bf-ae67-bf41931ee055-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-trusted-ca\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381335 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5895e15-1275-4b7b-9f0d-0a3baf72490b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381371 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381490 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613a3da0-fd65-48f7-a750-b53e06ec39d8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381515 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2f4h\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-kube-api-access-q2f4h\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381548 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/32ce877e-4675-4f92-a2b3-7be9a27b36d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4ttb\" (UniqueName: \"kubernetes.io/projected/254acf08-7bbb-4e84-95ed-21ce84733817-kube-api-access-n4ttb\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381610 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381632 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5470bb85-cb17-49bf-ae67-bf41931ee055-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381733 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-config\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5895e15-1275-4b7b-9f0d-0a3baf72490b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381804 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a43d9fa2-d037-4b14-a90e-30b81108b214-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381839 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea86b52-00b1-458f-8c02-4baaf402d190-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381914 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfrr\" (UniqueName: \"kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381964 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.381989 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea86b52-00b1-458f-8c02-4baaf402d190-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382015 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-default-certificate\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382054 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613a3da0-fd65-48f7-a750-b53e06ec39d8-config\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382085 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382124 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382151 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382180 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wcc\" (UniqueName: \"kubernetes.io/projected/b5895e15-1275-4b7b-9f0d-0a3baf72490b-kube-api-access-t7wcc\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382247 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2llc\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spzpf\" (UniqueName: \"kubernetes.io/projected/5470bb85-cb17-49bf-ae67-bf41931ee055-kube-api-access-spzpf\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382357 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613a3da0-fd65-48f7-a750-b53e06ec39d8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382406 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 
17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.382535 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.882508405 +0000 UTC m=+173.698708215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382839 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ce877e-4675-4f92-a2b3-7be9a27b36d2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.382899 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb437a-7568-41fc-a922-644ad2cfdda2-service-ca-bundle\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.383017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea86b52-00b1-458f-8c02-4baaf402d190-config\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.383118 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.383260 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a43d9fa2-d037-4b14-a90e-30b81108b214-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.383432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.383774 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-stats-auth\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384127 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384163 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/254acf08-7bbb-4e84-95ed-21ce84733817-serving-cert\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384607 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzsp9\" (UniqueName: \"kubernetes.io/projected/facb437a-7568-41fc-a922-644ad2cfdda2-kube-api-access-fzsp9\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384868 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8pzs\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-kube-api-access-x8pzs\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384921 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-metrics-certs\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.384957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.387558 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.403946 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.454788 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.464473 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98r5\" (UniqueName: \"kubernetes.io/projected/66b7c770-0864-43b0-8be8-8c9e26cedb5f-kube-api-access-r98r5\") pod \"openshift-config-operator-7777fb866f-8nh95\" (UID: \"66b7c770-0864-43b0-8be8-8c9e26cedb5f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.481258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5mrc\" (UniqueName: \"kubernetes.io/projected/a43a88c1-27e9-46ab-a605-3aed976d512c-kube-api-access-l5mrc\") pod \"etcd-operator-b45778765-c4l9z\" (UID: \"a43a88c1-27e9-46ab-a605-3aed976d512c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.485742 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.485937 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.98590031 +0000 UTC m=+173.802099910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.485994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613a3da0-fd65-48f7-a750-b53e06ec39d8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2f4h\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-kube-api-access-q2f4h\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486050 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/32ce877e-4675-4f92-a2b3-7be9a27b36d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486091 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486112 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-client\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486129 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-serving-cert\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5895e15-1275-4b7b-9f0d-0a3baf72490b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486165 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwfqr\" (UniqueName: \"kubernetes.io/projected/0880746f-1a97-4302-8bd8-062a1f849e23-kube-api-access-qwfqr\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486218 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a43d9fa2-d037-4b14-a90e-30b81108b214-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-audit-dir\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486283 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea86b52-00b1-458f-8c02-4baaf402d190-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-certs\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486343 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wcc\" (UniqueName: \"kubernetes.io/projected/b5895e15-1275-4b7b-9f0d-0a3baf72490b-kube-api-access-t7wcc\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486360 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/627d0ee4-906a-4e10-9350-80074b99e9f4-config-volume\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486450 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r2llc\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486471 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spzpf\" (UniqueName: \"kubernetes.io/projected/5470bb85-cb17-49bf-ae67-bf41931ee055-kube-api-access-spzpf\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-socket-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486515 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.486531 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-node-bootstrap-token\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487638 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5895e15-1275-4b7b-9f0d-0a3baf72490b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487713 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613a3da0-fd65-48f7-a750-b53e06ec39d8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487776 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ce877e-4675-4f92-a2b3-7be9a27b36d2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487794 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b52ca21-3afa-490c-aa78-a60b67dc0c52-cert\") pod \"ingress-canary-qbqjf\" (UID: \"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487810 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487839 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea86b52-00b1-458f-8c02-4baaf402d190-config\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487865 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487912 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487927 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-cabundle\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.487957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-images\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.488540 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea86b52-00b1-458f-8c02-4baaf402d190-config\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.489149 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.489378 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.489604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72ft\" (UniqueName: \"kubernetes.io/projected/1a5f0220-44b0-4db7-8849-0846e57a8730-kube-api-access-h72ft\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.490362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a43d9fa2-d037-4b14-a90e-30b81108b214-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.490810 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzsp9\" (UniqueName: \"kubernetes.io/projected/facb437a-7568-41fc-a922-644ad2cfdda2-kube-api-access-fzsp9\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.490983 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-metrics-certs\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491011 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491162 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c846n\" (UniqueName: \"kubernetes.io/projected/3301d9a4-0f26-4331-b29d-d38fec4a60c7-kube-api-access-c846n\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491211 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491265 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grsgh\" (UniqueName: \"kubernetes.io/projected/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-kube-api-access-grsgh\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-node-pullsecrets\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491325 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4qm\" (UniqueName: \"kubernetes.io/projected/627d0ee4-906a-4e10-9350-80074b99e9f4-kube-api-access-ww4qm\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491367 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491427 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5470bb85-cb17-49bf-ae67-bf41931ee055-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491453 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-trusted-ca\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491482 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfq8d\" (UniqueName: \"kubernetes.io/projected/689a8ef9-8892-4a61-b050-540f2e13ac4c-kube-api-access-xfq8d\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491538 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491564 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4ttb\" (UniqueName: \"kubernetes.io/projected/254acf08-7bbb-4e84-95ed-21ce84733817-kube-api-access-n4ttb\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-plugins-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491618 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5470bb85-cb17-49bf-ae67-bf41931ee055-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491643 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-config\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491686 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: 
\"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491711 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qh7\" (UniqueName: \"kubernetes.io/projected/e870c417-07ff-4c42-8e8f-7db6078f3b5d-kube-api-access-f6qh7\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491739 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea86b52-00b1-458f-8c02-4baaf402d190-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491765 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9fnk\" (UniqueName: \"kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491786 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491813 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcfrr\" (UniqueName: \"kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491841 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491862 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/627d0ee4-906a-4e10-9350-80074b99e9f4-metrics-tls\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491899 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491926 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-default-certificate\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-config\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.491985 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613a3da0-fd65-48f7-a750-b53e06ec39d8-config\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-webhook-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492060 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcnw8\" (UniqueName: \"kubernetes.io/projected/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-kube-api-access-kcnw8\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492116 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492144 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc 
kubenswrapper[4724]: I0223 17:33:37.492166 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492195 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-mountpoint-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492216 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-encryption-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492237 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-audit\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492262 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492284 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-registration-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-srv-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492336 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb437a-7568-41fc-a922-644ad2cfdda2-service-ca-bundle\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492366 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88bgl\" (UniqueName: \"kubernetes.io/projected/0b52ca21-3afa-490c-aa78-a60b67dc0c52-kube-api-access-88bgl\") pod \"ingress-canary-qbqjf\" (UID: 
\"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492405 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-serving-cert\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9w4\" (UniqueName: \"kubernetes.io/projected/c084a005-fee9-4c88-b875-8b5ddaf06820-kube-api-access-fc9w4\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492463 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e842e9a3-2897-414d-8606-46bb70b207d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492490 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.492623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ce877e-4675-4f92-a2b3-7be9a27b36d2-trusted-ca\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.493224 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.493291 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5470bb85-cb17-49bf-ae67-bf41931ee055-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.493503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " 
pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.493860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613a3da0-fd65-48f7-a750-b53e06ec39d8-config\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494178 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494479 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb437a-7568-41fc-a922-644ad2cfdda2-service-ca-bundle\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a43d9fa2-d037-4b14-a90e-30b81108b214-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494686 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494711 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e870c417-07ff-4c42-8e8f-7db6078f3b5d-proxy-tls\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494753 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwn4x\" (UniqueName: \"kubernetes.io/projected/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-kube-api-access-vwn4x\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.494794 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfq2\" (UniqueName: \"kubernetes.io/projected/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-kube-api-access-rjfq2\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.495555 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.495615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a43d9fa2-d037-4b14-a90e-30b81108b214-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.495936 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-trusted-ca\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496145 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-csi-data-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496192 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-key\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496222 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-stats-auth\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496245 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdvgw\" (UniqueName: \"kubernetes.io/projected/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-kube-api-access-wdvgw\") pod 
\"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/254acf08-7bbb-4e84-95ed-21ce84733817-serving-cert\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496532 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-image-import-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496559 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msdm\" (UniqueName: \"kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496596 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8pzs\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-kube-api-access-x8pzs\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496705 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-profile-collector-cert\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496734 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwphs\" (UniqueName: \"kubernetes.io/projected/e842e9a3-2897-414d-8606-46bb70b207d9-kube-api-access-jwphs\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496843 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-srv-cert\") pod 
\"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-tmpfs\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496897 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5895e15-1275-4b7b-9f0d-0a3baf72490b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496923 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.496989 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.497425 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:37.997367354 +0000 UTC m=+173.813566954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.498284 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254acf08-7bbb-4e84-95ed-21ce84733817-config\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.498824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.499336 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.501079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.501222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/32ce877e-4675-4f92-a2b3-7be9a27b36d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.502248 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-stats-auth\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.504140 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.504244 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: 
\"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.504436 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5895e15-1275-4b7b-9f0d-0a3baf72490b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.504960 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.505528 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-default-certificate\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.505595 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.506183 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5470bb85-cb17-49bf-ae67-bf41931ee055-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.506310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea86b52-00b1-458f-8c02-4baaf402d190-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.506375 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613a3da0-fd65-48f7-a750-b53e06ec39d8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.507950 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnqlx\" (UniqueName: \"kubernetes.io/projected/415b36c5-1000-4ff9-9640-1ec29a3728c1-kube-api-access-qnqlx\") pod \"authentication-operator-69f744f599-sg2k6\" (UID: \"415b36c5-1000-4ff9-9640-1ec29a3728c1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.509025 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facb437a-7568-41fc-a922-644ad2cfdda2-metrics-certs\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.518052 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/254acf08-7bbb-4e84-95ed-21ce84733817-serving-cert\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.526229 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"] Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.532600 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvx2n\" (UniqueName: \"kubernetes.io/projected/a5519032-42eb-483d-8bc4-a1fad9b5dc28-kube-api-access-wvx2n\") pod \"machine-config-controller-84d6567774-6nv99\" (UID: \"a5519032-42eb-483d-8bc4-a1fad9b5dc28\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.539016 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8hzn4"] Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.541789 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.543744 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsz4\" (UniqueName: \"kubernetes.io/projected/334ae0f0-e733-430f-b670-4ed4244bfa22-kube-api-access-rrsz4\") pod \"migrator-59844c95c7-plsvj\" (UID: \"334ae0f0-e733-430f-b670-4ed4244bfa22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" Feb 23 17:33:37 crc kubenswrapper[4724]: W0223 17:33:37.546704 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757355b8_9b0f_4c38_9560_a0281e0fa332.slice/crio-ee2cacd57ae8ab25878bba19ffdcf1a59d7e52428addc164d77c55653bf47e80 WatchSource:0}: Error finding container ee2cacd57ae8ab25878bba19ffdcf1a59d7e52428addc164d77c55653bf47e80: Status 404 returned error can't find the container with id ee2cacd57ae8ab25878bba19ffdcf1a59d7e52428addc164d77c55653bf47e80 Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.563653 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g797b\" (UniqueName: \"kubernetes.io/projected/000721f3-4213-4d68-b390-d172a0fea797-kube-api-access-g797b\") pod \"dns-operator-744455d44c-p4vpc\" (UID: \"000721f3-4213-4d68-b390-d172a0fea797\") " pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.563912 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.570547 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm"] Feb 23 17:33:37 crc kubenswrapper[4724]: W0223 17:33:37.575340 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe2c617a_30bc_4095_b085_d6306827fcce.slice/crio-24e93b8c2fa2e7c39ebd20d67cf98d3958ff02efa094effc23b80aa94dc8f86e WatchSource:0}: Error finding container 24e93b8c2fa2e7c39ebd20d67cf98d3958ff02efa094effc23b80aa94dc8f86e: Status 404 returned error can't find the container with id 24e93b8c2fa2e7c39ebd20d67cf98d3958ff02efa094effc23b80aa94dc8f86e Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.584274 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.592765 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601063 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601252 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-cabundle\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601287 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-images\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601311 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72ft\" (UniqueName: \"kubernetes.io/projected/1a5f0220-44b0-4db7-8849-0846e57a8730-kube-api-access-h72ft\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c846n\" (UniqueName: \"kubernetes.io/projected/3301d9a4-0f26-4331-b29d-d38fec4a60c7-kube-api-access-c846n\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601422 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grsgh\" (UniqueName: \"kubernetes.io/projected/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-kube-api-access-grsgh\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601461 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-node-pullsecrets\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601475 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4qm\" (UniqueName: \"kubernetes.io/projected/627d0ee4-906a-4e10-9350-80074b99e9f4-kube-api-access-ww4qm\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " 
pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfq8d\" (UniqueName: \"kubernetes.io/projected/689a8ef9-8892-4a61-b050-540f2e13ac4c-kube-api-access-xfq8d\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601517 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-plugins-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601563 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9fnk\" (UniqueName: \"kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601582 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601599 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6qh7\" (UniqueName: \"kubernetes.io/projected/e870c417-07ff-4c42-8e8f-7db6078f3b5d-kube-api-access-f6qh7\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-config\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601667 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/627d0ee4-906a-4e10-9350-80074b99e9f4-metrics-tls\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601690 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-mountpoint-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601707 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-encryption-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601729 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-webhook-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601747 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcnw8\" (UniqueName: \"kubernetes.io/projected/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-kube-api-access-kcnw8\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601765 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601781 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-audit\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601801 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-registration-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601817 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-srv-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601838 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fc9w4\" (UniqueName: \"kubernetes.io/projected/c084a005-fee9-4c88-b875-8b5ddaf06820-kube-api-access-fc9w4\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e842e9a3-2897-414d-8606-46bb70b207d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601874 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88bgl\" (UniqueName: \"kubernetes.io/projected/0b52ca21-3afa-490c-aa78-a60b67dc0c52-kube-api-access-88bgl\") pod \"ingress-canary-qbqjf\" (UID: \"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601889 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-serving-cert\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601921 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601938 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e870c417-07ff-4c42-8e8f-7db6078f3b5d-proxy-tls\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwn4x\" (UniqueName: \"kubernetes.io/projected/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-kube-api-access-vwn4x\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.601989 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfq2\" (UniqueName: 
\"kubernetes.io/projected/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-kube-api-access-rjfq2\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602011 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-csi-data-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdvgw\" (UniqueName: \"kubernetes.io/projected/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-kube-api-access-wdvgw\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602071 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-key\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-image-import-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602109 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4msdm\" (UniqueName: \"kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-profile-collector-cert\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602150 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwphs\" (UniqueName: \"kubernetes.io/projected/e842e9a3-2897-414d-8606-46bb70b207d9-kube-api-access-jwphs\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602172 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-srv-cert\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602191 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-tmpfs\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602264 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602287 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-client\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602309 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-serving-cert\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602331 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwfqr\" (UniqueName: \"kubernetes.io/projected/0880746f-1a97-4302-8bd8-062a1f849e23-kube-api-access-qwfqr\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602357 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-audit-dir\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602413 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-certs\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602465 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-socket-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/627d0ee4-906a-4e10-9350-80074b99e9f4-config-volume\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602518 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-node-bootstrap-token\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b52ca21-3afa-490c-aa78-a60b67dc0c52-cert\") pod \"ingress-canary-qbqjf\" (UID: \"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602580 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.602613 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.603620 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.603722 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.103703871 +0000 UTC m=+173.919903471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.604471 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-cabundle\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.604954 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-images\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.606604 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.607093 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-node-pullsecrets\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.607296 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-plugins-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.608887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-audit\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.608903 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.609050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-mountpoint-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 
crc kubenswrapper[4724]: I0223 17:33:37.609303 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-registration-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.610085 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-config\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.610756 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0880746f-1a97-4302-8bd8-062a1f849e23-audit-dir\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.610869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.612544 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.612830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-tmpfs\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.613200 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/627d0ee4-906a-4e10-9350-80074b99e9f4-metrics-tls\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.614006 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.614346 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-serving-cert\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.614739 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-image-import-ca\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.615676 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0880746f-1a97-4302-8bd8-062a1f849e23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.616356 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-encryption-config\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.617416 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-srv-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.618029 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.618082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a5f0220-44b0-4db7-8849-0846e57a8730-signing-key\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.618522 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-profile-collector-cert\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.619089 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.619142 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e842e9a3-2897-414d-8606-46bb70b207d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.619328 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/627d0ee4-906a-4e10-9350-80074b99e9f4-config-volume\") pod 
\"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.619712 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.620619 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.621100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.623334 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-webhook-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.623770 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e870c417-07ff-4c42-8e8f-7db6078f3b5d-proxy-tls\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.623917 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c084a005-fee9-4c88-b875-8b5ddaf06820-srv-cert\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.624144 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-csi-data-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.624211 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/689a8ef9-8892-4a61-b050-540f2e13ac4c-socket-dir\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.625260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.625858 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e870c417-07ff-4c42-8e8f-7db6078f3b5d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.626536 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"] Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.627239 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.627584 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-etcd-client\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.628831 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0880746f-1a97-4302-8bd8-062a1f849e23-serving-cert\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.629508 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-certs\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.629959 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3301d9a4-0f26-4331-b29d-d38fec4a60c7-node-bootstrap-token\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.630532 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") pod \"route-controller-manager-6576b87f9c-wgdx8\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.631022 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.637827 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b52ca21-3afa-490c-aa78-a60b67dc0c52-cert\") pod \"ingress-canary-qbqjf\" (UID: \"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.640240 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.646128 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.649108 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.664080 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.668826 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.677422 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.689486 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xttsp"] Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.690233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5jdvd\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.704545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.705112 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.205090297 +0000 UTC m=+174.021289897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.706349 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.710380 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.720647 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2f4h\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-kube-api-access-q2f4h\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.730245 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.736984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2llc\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.787591 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wcc\" (UniqueName: \"kubernetes.io/projected/b5895e15-1275-4b7b-9f0d-0a3baf72490b-kube-api-access-t7wcc\") pod \"kube-storage-version-migrator-operator-b67b599dd-mr2c4\" (UID: \"b5895e15-1275-4b7b-9f0d-0a3baf72490b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.796677 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.806198 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.806936 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 17:33:38.306913063 +0000 UTC m=+174.123112653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.809112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spzpf\" (UniqueName: \"kubernetes.io/projected/5470bb85-cb17-49bf-ae67-bf41931ee055-kube-api-access-spzpf\") pod \"openshift-controller-manager-operator-756b6f6bc6-ddvrv\" (UID: \"5470bb85-cb17-49bf-ae67-bf41931ee055\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.820409 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea86b52-00b1-458f-8c02-4baaf402d190-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d487c\" (UID: \"8ea86b52-00b1-458f-8c02-4baaf402d190\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.858936 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613a3da0-fd65-48f7-a750-b53e06ec39d8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r59dr\" (UID: \"613a3da0-fd65-48f7-a750-b53e06ec39d8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.877096 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a43d9fa2-d037-4b14-a90e-30b81108b214-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q4lmq\" (UID: \"a43d9fa2-d037-4b14-a90e-30b81108b214\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.890337 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzsp9\" (UniqueName: \"kubernetes.io/projected/facb437a-7568-41fc-a922-644ad2cfdda2-kube-api-access-fzsp9\") pod \"router-default-5444994796-s77tw\" (UID: \"facb437a-7568-41fc-a922-644ad2cfdda2\") " pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.899019 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.909057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: E0223 17:33:37.909651 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.409615191 +0000 UTC m=+174.225814791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.918569 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4ttb\" (UniqueName: \"kubernetes.io/projected/254acf08-7bbb-4e84-95ed-21ce84733817-kube-api-access-n4ttb\") pod \"console-operator-58897d9998-ch92v\" (UID: \"254acf08-7bbb-4e84-95ed-21ce84733817\") " pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.922526 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a10ea92-fb0c-4819-92d9-e3703c3dbe09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t7nzb\" (UID: \"3a10ea92-fb0c-4819-92d9-e3703c3dbe09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.931466 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.947082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.958016 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.967749 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcfrr\" (UniqueName: \"kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr\") pod \"console-f9d7485db-fknnv\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.979602 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.986834 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.991272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8pzs\" (UniqueName: \"kubernetes.io/projected/32ce877e-4675-4f92-a2b3-7be9a27b36d2-kube-api-access-x8pzs\") pod \"ingress-operator-5b745b69d9-vrsk4\" (UID: \"32ce877e-4675-4f92-a2b3-7be9a27b36d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:37 crc kubenswrapper[4724]: I0223 17:33:37.995206 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nh95"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:37.999092 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.009958 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.010126 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.510103694 +0000 UTC m=+174.326303294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.010133 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4qm\" (UniqueName: \"kubernetes.io/projected/627d0ee4-906a-4e10-9350-80074b99e9f4-kube-api-access-ww4qm\") pod \"dns-default-72w5z\" (UID: \"627d0ee4-906a-4e10-9350-80074b99e9f4\") " pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.010364 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.010989 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.510980975 +0000 UTC m=+174.327180576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.017971 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.041506 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72ft\" (UniqueName: \"kubernetes.io/projected/1a5f0220-44b0-4db7-8849-0846e57a8730-kube-api-access-h72ft\") pod \"service-ca-9c57cc56f-4kmgv\" (UID: \"1a5f0220-44b0-4db7-8849-0846e57a8730\") " pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.042298 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.071609 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grsgh\" (UniqueName: \"kubernetes.io/projected/bfa1e7f2-fdc3-47cb-b906-0e138164c57d-kube-api-access-grsgh\") pod \"multus-admission-controller-857f4d67dd-sjpqd\" (UID: \"bfa1e7f2-fdc3-47cb-b906-0e138164c57d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.080617 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.082972 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sg2k6"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.086191 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c846n\" (UniqueName: \"kubernetes.io/projected/3301d9a4-0f26-4331-b29d-d38fec4a60c7-kube-api-access-c846n\") pod \"machine-config-server-5xvjh\" (UID: \"3301d9a4-0f26-4331-b29d-d38fec4a60c7\") " pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:38 crc kubenswrapper[4724]: W0223 17:33:38.089658 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b7c770_0864_43b0_8be8_8c9e26cedb5f.slice/crio-3f2b1d90e14e1592e46da4a72b79a33335ed4903666bfe7419965b3f3bd6deca WatchSource:0}: Error finding container 3f2b1d90e14e1592e46da4a72b79a33335ed4903666bfe7419965b3f3bd6deca: Status 404 returned error can't find the container with id 3f2b1d90e14e1592e46da4a72b79a33335ed4903666bfe7419965b3f3bd6deca Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.093801 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfq8d\" (UniqueName: \"kubernetes.io/projected/689a8ef9-8892-4a61-b050-540f2e13ac4c-kube-api-access-xfq8d\") pod \"csi-hostpathplugin-bdrlz\" (UID: \"689a8ef9-8892-4a61-b050-540f2e13ac4c\") " pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.111004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.111960 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.61193438 +0000 UTC m=+174.428133980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.122790 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88bgl\" (UniqueName: \"kubernetes.io/projected/0b52ca21-3afa-490c-aa78-a60b67dc0c52-kube-api-access-88bgl\") pod \"ingress-canary-qbqjf\" (UID: \"0b52ca21-3afa-490c-aa78-a60b67dc0c52\") " pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.124412 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9fnk\" (UniqueName: \"kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk\") pod \"marketplace-operator-79b997595-n984k\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.131561 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.139783 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.149314 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcnw8\" (UniqueName: \"kubernetes.io/projected/dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d-kube-api-access-kcnw8\") pod \"package-server-manager-789f6589d5-dlfsv\" (UID: \"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.159178 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.159811 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5xvjh" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.160967 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p4vpc"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.178153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" event={"ID":"f1ba78f6-528b-46c5-b908-a0b5e69d4787","Type":"ContainerStarted","Data":"9f744f9892ceeba1d86dd538f0dce7030d26fd3478d815fb6dcba6eaabbb6769"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.180229 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9w4\" (UniqueName: \"kubernetes.io/projected/c084a005-fee9-4c88-b875-8b5ddaf06820-kube-api-access-fc9w4\") pod \"catalog-operator-68c6474976-9vlcc\" (UID: \"c084a005-fee9-4c88-b875-8b5ddaf06820\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.182501 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6qh7\" (UniqueName: \"kubernetes.io/projected/e870c417-07ff-4c42-8e8f-7db6078f3b5d-kube-api-access-f6qh7\") pod \"machine-config-operator-74547568cd-gddsl\" (UID: \"e870c417-07ff-4c42-8e8f-7db6078f3b5d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.182708 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.198673 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" event={"ID":"66b7c770-0864-43b0-8be8-8c9e26cedb5f","Type":"ContainerStarted","Data":"3f2b1d90e14e1592e46da4a72b79a33335ed4903666bfe7419965b3f3bd6deca"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.199478 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qbqjf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.205500 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" event={"ID":"4a9e0634-64a7-4106-8a10-bfed1ab672da","Type":"ContainerStarted","Data":"e65f1a1781ec0c52f0e8849219ab857d0e4c03d13f96a07bbedb7c27c850c004"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.205561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" event={"ID":"4a9e0634-64a7-4106-8a10-bfed1ab672da","Type":"ContainerStarted","Data":"6dfb257a4767e9ac0f580979597a9aa26fa131dea228b522091635ac3c7cb746"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.207230 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.211019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8hzn4" event={"ID":"fe2c617a-30bc-4095-b085-d6306827fcce","Type":"ContainerStarted","Data":"abb4489f7569db615d62eb5428a708fac1fb3d5fd786c7e04bcf3f66aa863e1c"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.211066 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8hzn4" event={"ID":"fe2c617a-30bc-4095-b085-d6306827fcce","Type":"ContainerStarted","Data":"24e93b8c2fa2e7c39ebd20d67cf98d3958ff02efa094effc23b80aa94dc8f86e"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.211726 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8hzn4" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.213968 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.214349 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.214438 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.714425713 +0000 UTC m=+174.530625313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.214432 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.224074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" event={"ID":"757355b8-9b0f-4c38-9560-a0281e0fa332","Type":"ContainerStarted","Data":"ee2cacd57ae8ab25878bba19ffdcf1a59d7e52428addc164d77c55653bf47e80"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.225312 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" event={"ID":"9cfbc29a-94db-48d8-9393-38c581d767a5","Type":"ContainerStarted","Data":"40e63b00429fad01188a06a2a12343fa64106ee89613844b5881fc81bf62b4f2"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.225366 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" event={"ID":"9cfbc29a-94db-48d8-9393-38c581d767a5","Type":"ContainerStarted","Data":"31cb79835cd0aa0088d001d30c2dc72da8bfff365bb5c357d8c28f7ce3a648d1"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.225935 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.227871 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" event={"ID":"a71874e2-c4df-47f7-af47-b85d817995bf","Type":"ContainerStarted","Data":"75e9b71d598252bd1a206730bff96dfb5be1fecd5fb73697467255a19ae320c0"} Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.247760 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.250116 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4msdm\" (UniqueName: \"kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm\") pod \"collect-profiles-29531130-wbshf\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.258617 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwn4x\" (UniqueName: \"kubernetes.io/projected/c553fb55-ce10-4c80-82fa-8ccd91ff5cd0-kube-api-access-vwn4x\") pod \"service-ca-operator-777779d784-6rznq\" (UID: \"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.260743 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwphs\" (UniqueName: \"kubernetes.io/projected/e842e9a3-2897-414d-8606-46bb70b207d9-kube-api-access-jwphs\") pod \"control-plane-machine-set-operator-78cbb6b69f-n4kjh\" (UID: \"e842e9a3-2897-414d-8606-46bb70b207d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:38 crc kubenswrapper[4724]: W0223 17:33:38.282462 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod415b36c5_1000_4ff9_9640_1ec29a3728c1.slice/crio-10750ff0bb3772f3f795d07b8e66c6d26726228c4d1a66a559705a2dfbcf7f3f WatchSource:0}: Error finding container 10750ff0bb3772f3f795d07b8e66c6d26726228c4d1a66a559705a2dfbcf7f3f: Status 404 returned error can't find the container with id 10750ff0bb3772f3f795d07b8e66c6d26726228c4d1a66a559705a2dfbcf7f3f Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.283887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdvgw\" (UniqueName: \"kubernetes.io/projected/edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6-kube-api-access-wdvgw\") pod \"packageserver-d55dfcdfc-sxnxc\" (UID: \"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.291362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwfqr\" (UniqueName: \"kubernetes.io/projected/0880746f-1a97-4302-8bd8-062a1f849e23-kube-api-access-qwfqr\") pod \"apiserver-76f77b778f-xtsjf\" (UID: \"0880746f-1a97-4302-8bd8-062a1f849e23\") " pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.313738 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfq2\" (UniqueName: \"kubernetes.io/projected/4a5ae8b2-dd3e-49f0-97d2-790cc9b76107-kube-api-access-rjfq2\") pod \"olm-operator-6b444d44fb-45j55\" (UID: \"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.315235 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.317135 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.81710779 +0000 UTC m=+174.633307380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.347133 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.352215 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.358597 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.366946 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:38 crc kubenswrapper[4724]: W0223 17:33:38.371280 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3301d9a4_0f26_4331_b29d_d38fec4a60c7.slice/crio-dc548ec7e81a49370bf64b38d36715a2ecd8073614d9d69687a03f811f4cd525 WatchSource:0}: Error finding container dc548ec7e81a49370bf64b38d36715a2ecd8073614d9d69687a03f811f4cd525: Status 404 returned error can't find the container with id dc548ec7e81a49370bf64b38d36715a2ecd8073614d9d69687a03f811f4cd525 Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.377640 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.389843 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.398085 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.411208 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.417137 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c4l9z"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.417724 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.418028 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.418463 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:38.918442574 +0000 UTC m=+174.734642374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.439231 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.468353 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.521227 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.521700 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.021675045 +0000 UTC m=+174.837874645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.528382 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.587967 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99"] Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.614269 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv"] Feb 23 17:33:38 crc kubenswrapper[4724]: W0223 17:33:38.620664 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7746d0a1_242b_4afc_b968_36853a4ad1ac.slice/crio-42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa WatchSource:0}: Error finding container 42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa: Status 404 returned error can't find the container with id 42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.622885 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.623310 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.123294656 +0000 UTC m=+174.939494256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: W0223 17:33:38.664582 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c14acfb_83f3_4782_84df_6558dde9c268.slice/crio-f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7 WatchSource:0}: Error finding container f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7: Status 404 returned error can't find the container with id f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7 Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.724141 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.724781 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.224762184 +0000 UTC m=+175.040961784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.825809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.828667 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.328363474 +0000 UTC m=+175.144563064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.927466 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.927653 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.427604486 +0000 UTC m=+175.243804086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:38 crc kubenswrapper[4724]: I0223 17:33:38.928192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:38 crc kubenswrapper[4724]: E0223 17:33:38.928623 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.428607211 +0000 UTC m=+175.244806811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.032262 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.037352 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.537098013 +0000 UTC m=+175.353297803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.039846 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.040937 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.540915607 +0000 UTC m=+175.357115227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.141007 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.141685 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.641643266 +0000 UTC m=+175.457842866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.142179 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4kmgv"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.146671 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.152986 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.197961 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sjpqd"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.204695 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.209346 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.232461 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" event={"ID":"7746d0a1-242b-4afc-b968-36853a4ad1ac","Type":"ContainerStarted","Data":"42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.235668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" 
event={"ID":"415b36c5-1000-4ff9-9640-1ec29a3728c1","Type":"ContainerStarted","Data":"6805b23172e9c8ae16c0dc947be27164780a770c8be30743c164c5e929bf0dcd"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.235722 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" event={"ID":"415b36c5-1000-4ff9-9640-1ec29a3728c1","Type":"ContainerStarted","Data":"10750ff0bb3772f3f795d07b8e66c6d26726228c4d1a66a559705a2dfbcf7f3f"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.240467 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" event={"ID":"66b7c770-0864-43b0-8be8-8c9e26cedb5f","Type":"ContainerStarted","Data":"1f548261ee9595c10ef25e74e273d59bafa28c87f9352828475a3f355654c7c9"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.242857 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.242906 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.242946 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.243143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.243839 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" event={"ID":"000721f3-4213-4d68-b390-d172a0fea797","Type":"ContainerStarted","Data":"1c54d55591e7280aff81a585429d8c6b608f67e0b4068d38a77301d7c7f21da3"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.244080 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.245326 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.745308318 +0000 UTC m=+175.561507918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.249802 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" event={"ID":"5470bb85-cb17-49bf-ae67-bf41931ee055","Type":"ContainerStarted","Data":"5ec3cc2a1afdf92ef0bf3b2d1831af364c5f36f49771340bf7fb201a8f115b93"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.251272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.251890 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.258056 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.258130 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.259704 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" event={"ID":"f1ba78f6-528b-46c5-b908-a0b5e69d4787","Type":"ContainerStarted","Data":"125b739096702e4b5d8063f7c8729e3b55b11fc2ddb9fb01e0399e58a079214b"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.264606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s77tw" event={"ID":"facb437a-7568-41fc-a922-644ad2cfdda2","Type":"ContainerStarted","Data":"06d3826d428ffbb19e8d3a771a9ad605c95439a6a31f8408b14fd3d0469f4d11"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.264670 4724 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s77tw" event={"ID":"facb437a-7568-41fc-a922-644ad2cfdda2","Type":"ContainerStarted","Data":"2923b2025e07ba0a2a654fd55089d2be07776e502121faa0bea921750b4b92e8"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.273118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" event={"ID":"2c14acfb-83f3-4782-84df-6558dde9c268","Type":"ContainerStarted","Data":"f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.276305 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" event={"ID":"757355b8-9b0f-4c38-9560-a0281e0fa332","Type":"ContainerStarted","Data":"a6e946e37ee6d22768fafc08fbb8ed082d5b9dac186b570f6caa39f8f4bb28ca"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.279433 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.282246 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.285079 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5xvjh" event={"ID":"3301d9a4-0f26-4331-b29d-d38fec4a60c7","Type":"ContainerStarted","Data":"dc548ec7e81a49370bf64b38d36715a2ecd8073614d9d69687a03f811f4cd525"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.291578 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.299961 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4kcvg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.300034 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.300315 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" event={"ID":"334ae0f0-e733-430f-b670-4ed4244bfa22","Type":"ContainerStarted","Data":"d45dc20dbfd669481aebb7cf4423c32a3f382d5e5abacb5c45be5eaf856cface"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.331452 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" event={"ID":"9cfbc29a-94db-48d8-9393-38c581d767a5","Type":"ContainerStarted","Data":"6c63d1463550eb24c0475dc2232dc991b1491ecf1377c6d3a53abe5ca78d71b7"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.344737 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.346750 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.846725824 +0000 UTC m=+175.662925424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.367341 4724 generic.go:334] "Generic (PLEG): container finished" podID="a71874e2-c4df-47f7-af47-b85d817995bf" containerID="8747741baf0b7563927b1a2da21f90af8a24c5636abc8f36e383fc6edb24b0c3" exitCode=0 Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.367567 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" event={"ID":"a71874e2-c4df-47f7-af47-b85d817995bf","Type":"ContainerDied","Data":"8747741baf0b7563927b1a2da21f90af8a24c5636abc8f36e383fc6edb24b0c3"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.392898 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" event={"ID":"a5519032-42eb-483d-8bc4-a1fad9b5dc28","Type":"ContainerStarted","Data":"ad3f9a0557c43848cdd4675f2e5641ee43e9a821f24c7558f98b37867e474c68"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.408296 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" event={"ID":"a43a88c1-27e9-46ab-a605-3aed976d512c","Type":"ContainerStarted","Data":"58755712590faa60ed1882b5d970d986f3e8b4b687a014afc15e95424fc65a83"} Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.409703 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.409773 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.447525 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.448166 4724 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:39.948149941 +0000 UTC m=+175.764349541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.476698 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.501267 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ch92v"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.520175 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.525933 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72w5z"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.525996 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qbqjf"] Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.549060 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.550470 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.050422338 +0000 UTC m=+175.866621948 (durationBeforeRetry 500ms). 
Feb 23 17:33:39 crc kubenswrapper[4724]: W0223 17:33:39.561575 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod254acf08_7bbb_4e84_95ed_21ce84733817.slice/crio-b87efe9762f0ab5c81be575697d127c90dd91d2f009b21c2926e40556aae4650 WatchSource:0}: Error finding container b87efe9762f0ab5c81be575697d127c90dd91d2f009b21c2926e40556aae4650: Status 404 returned error can't find the container with id b87efe9762f0ab5c81be575697d127c90dd91d2f009b21c2926e40556aae4650
Feb 23 17:33:39 crc kubenswrapper[4724]: W0223 17:33:39.564658 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b52ca21_3afa_490c_aa78_a60b67dc0c52.slice/crio-e21af88a578d10a8914cc2b71c8dcf03a8de87eb101bd81610b3a8e7455125d8 WatchSource:0}: Error finding container e21af88a578d10a8914cc2b71c8dcf03a8de87eb101bd81610b3a8e7455125d8: Status 404 returned error can't find the container with id e21af88a578d10a8914cc2b71c8dcf03a8de87eb101bd81610b3a8e7455125d8
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.652925 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.653125 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fknnv"]
Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.653891 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.153845294 +0000 UTC m=+175.970044894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.669728 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.671539 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.677541 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bdrlz"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.758302 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.759301 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.258524881 +0000 UTC m=+176.074724481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.760579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.760940 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.26093337 +0000 UTC m=+176.077132970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:39 crc kubenswrapper[4724]: W0223 17:33:39.780361 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod997b5710_9b99_4207_92da_28b7a1923db2.slice/crio-60e9628fccca08824d603e3c61e2b10981b7f61c45137c8b029895ea222fac7a WatchSource:0}: Error finding container 60e9628fccca08824d603e3c61e2b10981b7f61c45137c8b029895ea222fac7a: Status 404 returned error can't find the container with id 60e9628fccca08824d603e3c61e2b10981b7f61c45137c8b029895ea222fac7a
Feb 23 17:33:39 crc kubenswrapper[4724]: W0223 17:33:39.785712 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedfc2b29_2ed6_4d6f_aed5_61bdfd205dc6.slice/crio-a5a8ad9e2c1f8ebc4af4855b25da75905a0b543235934bf710c6af7103bbfe0b WatchSource:0}: Error finding container a5a8ad9e2c1f8ebc4af4855b25da75905a0b543235934bf710c6af7103bbfe0b: Status 404 returned error can't find the container with id a5a8ad9e2c1f8ebc4af4855b25da75905a0b543235934bf710c6af7103bbfe0b
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.828040 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtsjf"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.851662 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.859849 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.861443 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.861871 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.361849814 +0000 UTC m=+176.178049414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.865997 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.868245 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.889439 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.910008 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.930920 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8hzn4" podStartSLOduration=118.930895777 podStartE2EDuration="1m58.930895777s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:39.923261548 +0000 UTC m=+175.739461148" watchObservedRunningTime="2026-02-23 17:33:39.930895777 +0000 UTC m=+175.747095397"
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.935485 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6rznq"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.946701 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k"]
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.957293 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rrb7b" podStartSLOduration=119.957256441 podStartE2EDuration="1m59.957256441s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:39.950082303 +0000 UTC m=+175.766281903" watchObservedRunningTime="2026-02-23 17:33:39.957256441 +0000 UTC m=+175.773456051"
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.962949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:39 crc kubenswrapper[4724]: E0223 17:33:39.963635 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.463620089 +0000 UTC m=+176.279819689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:39 crc kubenswrapper[4724]: I0223 17:33:39.992575 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" podStartSLOduration=119.992548737 podStartE2EDuration="1m59.992548737s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:39.989014079 +0000 UTC m=+175.805213679" watchObservedRunningTime="2026-02-23 17:33:39.992548737 +0000 UTC m=+175.808748337"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.064235 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.064901 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.564861901 +0000 UTC m=+176.381061501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: W0223 17:33:40.068746 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc553fb55_ce10_4c80_82fa_8ccd91ff5cd0.slice/crio-7c662dbf30c5f403f3a0ec8ec335afaaa5b5f7a1fe8a3087b4c32a6d68dfa119 WatchSource:0}: Error finding container 7c662dbf30c5f403f3a0ec8ec335afaaa5b5f7a1fe8a3087b4c32a6d68dfa119: Status 404 returned error can't find the container with id 7c662dbf30c5f403f3a0ec8ec335afaaa5b5f7a1fe8a3087b4c32a6d68dfa119
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.084711 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hhhgx" podStartSLOduration=120.084683592 podStartE2EDuration="2m0.084683592s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.081800561 +0000 UTC m=+175.898000161" watchObservedRunningTime="2026-02-23 17:33:40.084683592 +0000 UTC m=+175.900883192"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.167278 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.170996 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.670961033 +0000 UTC m=+176.487160633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: W0223 17:33:40.226686 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7b8859f9257955d023f76c76c2bd71b2f914649008b1447a80db2cf7bc11cc97 WatchSource:0}: Error finding container 7b8859f9257955d023f76c76c2bd71b2f914649008b1447a80db2cf7bc11cc97: Status 404 returned error can't find the container with id 7b8859f9257955d023f76c76c2bd71b2f914649008b1447a80db2cf7bc11cc97
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.268939 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.269096 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.769056627 +0000 UTC m=+176.585256227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.269223 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.270270 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.770250566 +0000 UTC m=+176.586450166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.371201 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.371379 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.871345414 +0000 UTC m=+176.687545014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.371443 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.371842 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.871826486 +0000 UTC m=+176.688026086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.416120 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72w5z" event={"ID":"627d0ee4-906a-4e10-9350-80074b99e9f4","Type":"ContainerStarted","Data":"ec769f678d45e8f73850ed563de634a9763d37472110940e3bc6fc58aa4c6b28"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.417436 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" event={"ID":"7746d0a1-242b-4afc-b968-36853a4ad1ac","Type":"ContainerStarted","Data":"a57eb595fa93ecaedb32a080094709af0ecc7a1433b861be3244510d99225e53"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.417933 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.421257 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgdx8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.421417 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.421917 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" event={"ID":"2c14acfb-83f3-4782-84df-6558dde9c268","Type":"ContainerStarted","Data":"35a4e5e1ed3010b4c084c7200b6b2bd0e4e9d13275a81c92b6cbdc70da6aadd7"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.422216 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.424126 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" event={"ID":"0880746f-1a97-4302-8bd8-062a1f849e23","Type":"ContainerStarted","Data":"005d48dd45d925be547dada58d06a390ec7af43c3163126697fb759367f080fb"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.424144 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5jdvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.424236 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.430171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" event={"ID":"1a5f0220-44b0-4db7-8849-0846e57a8730","Type":"ContainerStarted","Data":"3f84e18a9696e3a21c3e5a7c85be187abbfabf310bae27295144959ad42f1300"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.430227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" event={"ID":"1a5f0220-44b0-4db7-8849-0846e57a8730","Type":"ContainerStarted","Data":"2c81110039432c6739fe785566097cbc6b495fdae0f62917e793fb5cd7679cf8"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.431550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" event={"ID":"32ce877e-4675-4f92-a2b3-7be9a27b36d2","Type":"ContainerStarted","Data":"bc9d7310e7edfefa66636700269ee854b17fd9ec8194b46ea480d221d5952320"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.433327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9dcb7a857d0bbf977db2d35cf5fc7eb8d96165315c448cbf36648d15f18494e8"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.434682 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" event={"ID":"bfa1e7f2-fdc3-47cb-b906-0e138164c57d","Type":"ContainerStarted","Data":"d7aeef0e7770b0f5fd173512dc3dd39abc562f04197f52a52379d00abd786cf3"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.434709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" event={"ID":"bfa1e7f2-fdc3-47cb-b906-0e138164c57d","Type":"ContainerStarted","Data":"5850433d945d161f6fcb9d569cab3be190038a6e4dc6451690d0087081d2423a"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.437678 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" event={"ID":"e63a5cc4-56f4-414c-87e9-4ec6ff77de47","Type":"ContainerStarted","Data":"586e2d3c4e00972ee9bd9a3eed2aa86c35a1ca5aa8c3baba5e5f19065d0186ca"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.446272 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" podStartSLOduration=119.446246313 podStartE2EDuration="1m59.446246313s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.444967591 +0000 UTC m=+176.261167191" watchObservedRunningTime="2026-02-23 17:33:40.446246313 +0000 UTC m=+176.262445913"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.447685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" event={"ID":"4a9e0634-64a7-4106-8a10-bfed1ab672da","Type":"ContainerStarted","Data":"2b50018621d3b7323abb8f9f3eae07f131bd16fb6735787349e8970a68c13deb"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.455297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" event={"ID":"5470bb85-cb17-49bf-ae67-bf41931ee055","Type":"ContainerStarted","Data":"6c0c5046ca5829761c3616ac5c0ec6c6a6161def5804406a02b618b7aeb24013"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.456998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" event={"ID":"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6","Type":"ContainerStarted","Data":"a5a8ad9e2c1f8ebc4af4855b25da75905a0b543235934bf710c6af7103bbfe0b"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.464786 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" event={"ID":"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0","Type":"ContainerStarted","Data":"7c662dbf30c5f403f3a0ec8ec335afaaa5b5f7a1fe8a3087b4c32a6d68dfa119"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.472923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" event={"ID":"3a10ea92-fb0c-4819-92d9-e3703c3dbe09","Type":"ContainerStarted","Data":"0069473c137b3e20b54c582ed9fac066c7e84cf166c2190322db613544e66f80"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.472970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" event={"ID":"3a10ea92-fb0c-4819-92d9-e3703c3dbe09","Type":"ContainerStarted","Data":"ebd9ddf1d74dc9b29b0c4721fdd50fec641b0259faa5476e2d72742f2b6fb3cb"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.473930 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.474315 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:40.974261128 +0000 UTC m=+176.790460738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.475336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" event={"ID":"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107","Type":"ContainerStarted","Data":"f77a3da52f399bec0bf169b13362881efc25fb8a17e21520c97a2f8d216ca02e"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.486301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7b8859f9257955d023f76c76c2bd71b2f914649008b1447a80db2cf7bc11cc97"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.488821 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" podStartSLOduration=119.488805418 podStartE2EDuration="1m59.488805418s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.485941047 +0000 UTC m=+176.302140647" watchObservedRunningTime="2026-02-23 17:33:40.488805418 +0000 UTC m=+176.305005008"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.507789 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"875ab3602a2f35e8689e33d51b74ce31d4506070a7b47504150e96358b9b4a5f"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.530694 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4kmgv" podStartSLOduration=119.530673557 podStartE2EDuration="1m59.530673557s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.529921768 +0000 UTC m=+176.346121368" watchObservedRunningTime="2026-02-23 17:33:40.530673557 +0000 UTC m=+176.346873157"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.557707 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" event={"ID":"67dbde4d-5c0f-45cf-82ae-435b16e17121","Type":"ContainerStarted","Data":"9d9379332d336fde1a922524ec31bbfa172fc5302318b2256b79d9e4745c379a"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.571951 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qbqjf" event={"ID":"0b52ca21-3afa-490c-aa78-a60b67dc0c52","Type":"ContainerStarted","Data":"9d9229c019ba82a636ddbcd19ac21693ad09c07daf8eeb0cf50439b24470e85e"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.572083 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qbqjf" event={"ID":"0b52ca21-3afa-490c-aa78-a60b67dc0c52","Type":"ContainerStarted","Data":"e21af88a578d10a8914cc2b71c8dcf03a8de87eb101bd81610b3a8e7455125d8"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.575926 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.577714 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.077699484 +0000 UTC m=+176.893899084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.615327 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xttsp" podStartSLOduration=119.615299097 podStartE2EDuration="1m59.615299097s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.613515692 +0000 UTC m=+176.429715302" watchObservedRunningTime="2026-02-23 17:33:40.615299097 +0000 UTC m=+176.431498697"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.615752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" event={"ID":"f1ba78f6-528b-46c5-b908-a0b5e69d4787","Type":"ContainerStarted","Data":"cfd2bc44d8791087588b95121886b3c823690104fcd71553b9c2d9523941f960"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.627813 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t7nzb" podStartSLOduration=119.627780756 podStartE2EDuration="1m59.627780756s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.58197092 +0000 UTC m=+176.398170540" watchObservedRunningTime="2026-02-23 17:33:40.627780756 +0000 UTC m=+176.443980356"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.632652 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" event={"ID":"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d","Type":"ContainerStarted","Data":"85aaa9a38dbae8cbc044eafeb9e9952e71450cd9cfe50357465df8fd92b94823"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.638426 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" event={"ID":"e870c417-07ff-4c42-8e8f-7db6078f3b5d","Type":"ContainerStarted","Data":"c565451077528a1928ef6eb97192b25d661ece382cee706ccf149b3764bedef0"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.648305 4724 generic.go:334] "Generic (PLEG): container finished" podID="66b7c770-0864-43b0-8be8-8c9e26cedb5f" containerID="1f548261ee9595c10ef25e74e273d59bafa28c87f9352828475a3f355654c7c9" exitCode=0
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.648516 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" event={"ID":"66b7c770-0864-43b0-8be8-8c9e26cedb5f","Type":"ContainerDied","Data":"1f548261ee9595c10ef25e74e273d59bafa28c87f9352828475a3f355654c7c9"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.649447 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ddvrv" podStartSLOduration=119.649368862 podStartE2EDuration="1m59.649368862s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.640987874 +0000 UTC m=+176.457187494" watchObservedRunningTime="2026-02-23 17:33:40.649368862 +0000 UTC m=+176.465568472"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.663228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" event={"ID":"a5519032-42eb-483d-8bc4-a1fad9b5dc28","Type":"ContainerStarted","Data":"9fd719dc85bd09312f0565f28c8fb40c1d58aaea74e67e2f308494ed8361edbf"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.664351 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fknnv" event={"ID":"997b5710-9b99-4207-92da-28b7a1923db2","Type":"ContainerStarted","Data":"60e9628fccca08824d603e3c61e2b10981b7f61c45137c8b029895ea222fac7a"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.665372 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" event={"ID":"689a8ef9-8892-4a61-b050-540f2e13ac4c","Type":"ContainerStarted","Data":"fc9e30221560bf51bfc577da585a7d659609c3cb3cf9c5cfecb1d24ec7faf195"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.667098 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" event={"ID":"b5895e15-1275-4b7b-9f0d-0a3baf72490b","Type":"ContainerStarted","Data":"c27b9ead5d888141ce7517fab56383b8886a40b8558c40ee157f4ee0d2fe874f"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.667137 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" event={"ID":"b5895e15-1275-4b7b-9f0d-0a3baf72490b","Type":"ContainerStarted","Data":"b2bcfe18b1d221e89ac0e2b59f19af8854bcbb0b4bb308315b5a2cc233a7adc0"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.670922 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qbqjf" podStartSLOduration=5.670900966 podStartE2EDuration="5.670900966s" podCreationTimestamp="2026-02-23 17:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.668885196 +0000 UTC m=+176.485084796" watchObservedRunningTime="2026-02-23 17:33:40.670900966 +0000 UTC m=+176.487100566"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.676815 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.678709 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.178667789 +0000 UTC m=+176.994867389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.694709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ch92v" event={"ID":"254acf08-7bbb-4e84-95ed-21ce84733817","Type":"ContainerStarted","Data":"13239f4e268de7653a6898375eea41b7964476dcccfac0ac30fc8832fb117119"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.694780 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ch92v" event={"ID":"254acf08-7bbb-4e84-95ed-21ce84733817","Type":"ContainerStarted","Data":"b87efe9762f0ab5c81be575697d127c90dd91d2f009b21c2926e40556aae4650"}
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.698004 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9k7fm" podStartSLOduration=120.697992508 podStartE2EDuration="2m0.697992508s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.696571223 +0000 UTC m=+176.512770823" watchObservedRunningTime="2026-02-23 17:33:40.697992508 +0000 UTC m=+176.514192108"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.698517 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ch92v"
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.701681 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-ch92v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.701791 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ch92v" podUID="254acf08-7bbb-4e84-95ed-21ce84733817" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
\"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.712928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" event={"ID":"334ae0f0-e733-430f-b670-4ed4244bfa22","Type":"ContainerStarted","Data":"6727dd73e5694bbda82dc177cd6f55cf7fc78111c31c238e0a83ad809b9a8fad"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.723378 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mr2c4" podStartSLOduration=119.723344467 podStartE2EDuration="1m59.723344467s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.72023388 +0000 UTC m=+176.536433480" watchObservedRunningTime="2026-02-23 17:33:40.723344467 +0000 UTC m=+176.539544067" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.736982 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" event={"ID":"8ea86b52-00b1-458f-8c02-4baaf402d190","Type":"ContainerStarted","Data":"5b85dedf0718922b96683ffae2987c3d7b3efb58627b31bfb37407d41c02ec68"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.737045 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" event={"ID":"8ea86b52-00b1-458f-8c02-4baaf402d190","Type":"ContainerStarted","Data":"df135a6e3460e81965200ee8809a1f623287d7a258a23125f71afef2762d247d"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.745221 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ch92v" podStartSLOduration=120.745191099 podStartE2EDuration="2m0.745191099s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.742768719 +0000 UTC m=+176.558968319" watchObservedRunningTime="2026-02-23 17:33:40.745191099 +0000 UTC m=+176.561390709" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.755664 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" event={"ID":"613a3da0-fd65-48f7-a750-b53e06ec39d8","Type":"ContainerStarted","Data":"e44e159b1fc557c342199cb1574e0959b33923e92e03f4b6c8847e8887ca46b6"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.755729 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" event={"ID":"613a3da0-fd65-48f7-a750-b53e06ec39d8","Type":"ContainerStarted","Data":"9ef4749bf0702d25ac86f3f99a21dfa8a03b22e82949544dc6646d7daff7ba2a"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.762288 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" event={"ID":"c084a005-fee9-4c88-b875-8b5ddaf06820","Type":"ContainerStarted","Data":"41389bb9bac2781bf79d032c2a92a125838b271d50ead91ab37436986b4f5c80"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.764283 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" event={"ID":"a43a88c1-27e9-46ab-a605-3aed976d512c","Type":"ContainerStarted","Data":"f083a012a7e8004ac02151dd9c2ccab51321650767f9cda470942dba1502101d"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.773825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" event={"ID":"e842e9a3-2897-414d-8606-46bb70b207d9","Type":"ContainerStarted","Data":"3c5c04771a3fdf13be294b56ae5b929d1868faf5199d9f11b33272b9872a470e"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.779359 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" event={"ID":"a43d9fa2-d037-4b14-a90e-30b81108b214","Type":"ContainerStarted","Data":"d9d1e4eccddfebbcaa64f6b1a10005ea2376a5854dcae5b3f03848875f093890"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.780266 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.780603 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.280590457 +0000 UTC m=+177.096790057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.788929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5xvjh" event={"ID":"3301d9a4-0f26-4331-b29d-d38fec4a60c7","Type":"ContainerStarted","Data":"2d22e68aef6fbd55bd613f373372c56590bb69021b4ad247e286deec054c215e"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.794839 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r59dr" podStartSLOduration=119.79481847 podStartE2EDuration="1m59.79481847s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.793761154 +0000 UTC m=+176.609960754" watchObservedRunningTime="2026-02-23 17:33:40.79481847 +0000 UTC m=+176.611018070" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.796931 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" event={"ID":"000721f3-4213-4d68-b390-d172a0fea797","Type":"ContainerStarted","Data":"300283d16821fc6d87aa6859cffa503f72e9c98db8f3274d7a76aa4b05888969"} Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.797995 4724 
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4kcvg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.798049 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.820960 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d487c" podStartSLOduration=119.820910188 podStartE2EDuration="1m59.820910188s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.818916218 +0000 UTC m=+176.635115818" watchObservedRunningTime="2026-02-23 17:33:40.820910188 +0000 UTC m=+176.637109788" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.859772 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c4l9z" podStartSLOduration=119.859745151 podStartE2EDuration="1m59.859745151s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.839846537 +0000 UTC m=+176.656046147" watchObservedRunningTime="2026-02-23 17:33:40.859745151 +0000 UTC m=+176.675944751" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.860547 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-s77tw" podStartSLOduration=119.860540951 podStartE2EDuration="1m59.860540951s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.858772937 +0000 UTC m=+176.674972547" watchObservedRunningTime="2026-02-23 17:33:40.860540951 +0000 UTC m=+176.676740551" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.893834 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.894416 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.394345209 +0000 UTC m=+177.210544819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.894727 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:40 crc kubenswrapper[4724]: E0223 17:33:40.901614 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.401584699 +0000 UTC m=+177.217784299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.923109 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sg2k6" podStartSLOduration=120.923079912 podStartE2EDuration="2m0.923079912s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.893042447 +0000 UTC m=+176.709242067" watchObservedRunningTime="2026-02-23 17:33:40.923079912 +0000 UTC m=+176.739279522" Feb 23 17:33:40 crc kubenswrapper[4724]: I0223 17:33:40.937713 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5xvjh" podStartSLOduration=5.937683995 podStartE2EDuration="5.937683995s" podCreationTimestamp="2026-02-23 17:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:40.936808253 +0000 UTC m=+176.753007863" watchObservedRunningTime="2026-02-23 17:33:40.937683995 +0000 UTC m=+176.753883595" Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.000124 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.000432 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.50036746 +0000 UTC m=+177.316567070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.000513 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.001165 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.501156749 +0000 UTC m=+177.317356349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.020737 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.024889 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.024970 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.102170 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.102759 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 17:33:41.602733278 +0000 UTC m=+177.418932878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.205701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.206373 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.706345289 +0000 UTC m=+177.522544959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.307092 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.307568 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.80754272 +0000 UTC m=+177.623742320 (durationBeforeRetry 500ms). 
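The repeating MountVolume.MountDevice and UnmountVolume.TearDown failures above share one root cause: the kubelet resolves the volume's driver name against its in-memory list of CSI plugins that have registered over the plugin-registration socket, and kubevirt.io.hostpath-provisioner has not registered at this point in boot, so every mount and unmount attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails the same lookup. A minimal Go sketch of that lookup pattern follows; the registry type and names are illustrative assumptions for this note, not the kubelet's actual internals, though the error string mirrors the log line.

package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry is a stand-in (hypothetical) for the kubelet's set of
// registered CSI plugins; real drivers appear in it only after they register
// over the kubelet's plugin-registration socket.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint (illustrative)
}

// lookup returns the endpoint for a driver, or an error shaped like the
// one seen in the log when the driver has not registered yet.
func (r *csiDriverRegistry) lookup(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	endpoint, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return endpoint, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}} // nothing registered yet
	if _, err := reg.lookup("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice would fail with:", err)
	}
}

Once the provisioner does register, the same periodic retries that produce these records would be expected to start succeeding with no other intervention.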
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.415355 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.415884 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:41.915867287 +0000 UTC m=+177.732066887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.516379 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.516843 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.016822292 +0000 UTC m=+177.833021892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.618107 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.618543 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.118529585 +0000 UTC m=+177.934729185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.719091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.719556 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.219529951 +0000 UTC m=+178.035729551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.799011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" event={"ID":"c084a005-fee9-4c88-b875-8b5ddaf06820","Type":"ContainerStarted","Data":"164c7a46b3f97a50430b81a1ba8ee41a0d7a6b5defe3a1869074bf9af9c895b6"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.800440 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" event={"ID":"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d","Type":"ContainerStarted","Data":"bb39df5d2e1c474dc8839a6fd680c033de5290612df52a53198a062cca72efa4"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.802223 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fknnv" event={"ID":"997b5710-9b99-4207-92da-28b7a1923db2","Type":"ContainerStarted","Data":"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.803686 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" event={"ID":"a43d9fa2-d037-4b14-a90e-30b81108b214","Type":"ContainerStarted","Data":"a48e691ae4f29dee811bcd07bb6c2417e8843933b8cf0a99450a6f1d542b288c"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.806196 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" event={"ID":"e842e9a3-2897-414d-8606-46bb70b207d9","Type":"ContainerStarted","Data":"516d4eb626c7fd4df3e882778bf8fb30c6e45baa2f9fc3a0ad15c87253c6408d"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.807465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" event={"ID":"edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6","Type":"ContainerStarted","Data":"be24da387c9b713c69794ff7348889a1f57047ba3df027d70aabcac9bcd1f15e"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.808171 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.809533 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sxnxc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.809580 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" podUID="edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.816884 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72w5z" event={"ID":"627d0ee4-906a-4e10-9350-80074b99e9f4","Type":"ContainerStarted","Data":"7551c6242bce45cf2223d9b7b028e403e63c078d9703702d93f6821d5b53da05"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.818762 4724 generic.go:334] "Generic (PLEG): container finished" podID="0880746f-1a97-4302-8bd8-062a1f849e23" containerID="1d5822b3698d01a02dfcb45860ef0a998894decd017fe19407c34f581fda3b37" exitCode=0
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.818851 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" event={"ID":"0880746f-1a97-4302-8bd8-062a1f849e23","Type":"ContainerDied","Data":"1d5822b3698d01a02dfcb45860ef0a998894decd017fe19407c34f581fda3b37"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.821597 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" event={"ID":"32ce877e-4675-4f92-a2b3-7be9a27b36d2","Type":"ContainerStarted","Data":"41405573b74d830f4100570d3dae33b5ebb83d24f0f9af23038c7662a4c77d34"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.822263 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.822344 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.322318941 +0000 UTC m=+178.138518541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.825254 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"77179f5ec70ea97a5efa00f3646d578477de1effefcdf3516bf9c15a9c6bfcea"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.826942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" event={"ID":"67dbde4d-5c0f-45cf-82ae-435b16e17121","Type":"ContainerStarted","Data":"ee5f8314678a01afc418a25852abecc282f30eba6f14ba505c8be9808761db1e"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.831603 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" event={"ID":"334ae0f0-e733-430f-b670-4ed4244bfa22","Type":"ContainerStarted","Data":"92b844187dd0bc71abc6930070c2c1f70cf6865999ed810387eab0616ea643ce"}
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832256 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4kcvg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body=
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832303 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-ch92v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832407 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgdx8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832313 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832451 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832374 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ch92v" podUID="254acf08-7bbb-4e84-95ed-21ce84733817" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832582 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5jdvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.832613 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.847301 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q4lmq" podStartSLOduration=120.84726973 podStartE2EDuration="2m0.84726973s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:41.821924341 +0000 UTC m=+177.638123941" watchObservedRunningTime="2026-02-23 17:33:41.84726973 +0000 UTC m=+177.663469330"
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.923749 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.924156 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.424085756 +0000 UTC m=+178.240285366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:41 crc kubenswrapper[4724]: I0223 17:33:41.927812 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:41 crc kubenswrapper[4724]: E0223 17:33:41.928376 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.428357091 +0000 UTC m=+178.244556691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.021838 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.021924 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.030219 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.030427 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.530398663 +0000 UTC m=+178.346598273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.030618 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.030975 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.530964757 +0000 UTC m=+178.347164357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.132503 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.132773 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.632726572 +0000 UTC m=+178.448926182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.132841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.133309 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.633297216 +0000 UTC m=+178.449496896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.234007 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.234566 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.734545928 +0000 UTC m=+178.550745528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.336448 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.336763 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.836746673 +0000 UTC m=+178.652946273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.437224 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.437491 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.937452112 +0000 UTC m=+178.753651712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.437551 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.437937 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:42.937929504 +0000 UTC m=+178.754129094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.539197 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.539566 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.039525874 +0000 UTC m=+178.855725474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.539787 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.540189 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.04017415 +0000 UTC m=+178.856373750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.641107 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.641411 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.14134787 +0000 UTC m=+178.957547470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.641480 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.641827 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.141819322 +0000 UTC m=+178.958018922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.742874 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.743067 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.243027993 +0000 UTC m=+179.059227593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.743228 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.743653 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.243645278 +0000 UTC m=+179.059844878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.839942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" event={"ID":"c553fb55-ce10-4c80-82fa-8ccd91ff5cd0","Type":"ContainerStarted","Data":"dcaf5ead96f137fb3b4ddd9a5f4161a04701b8f6aac027d80826660ea7712de7"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.842974 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" event={"ID":"e63a5cc4-56f4-414c-87e9-4ec6ff77de47","Type":"ContainerStarted","Data":"2a622a9edfe599272f64a126a8abb9947d66beef0ae978d3ac916027ef1086cf"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.843193 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n984k"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.843837 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.844380 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.344354587 +0000 UTC m=+179.160554197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.844927 4724 csr.go:261] certificate signing request csr-2dbpk is approved, waiting to be issued
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.845004 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n984k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.845182 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.849461 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72w5z" event={"ID":"627d0ee4-906a-4e10-9350-80074b99e9f4","Type":"ContainerStarted","Data":"db968cc2ddb6e317a36f7cc4a6b20130a1afb4b3af200c5634d28767abbd5e32"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.849777 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-72w5z"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.852846 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"40abc738ebf11dd8e5f4af60d990b0ab8f9b30efa5d6307fdd876b93a0042d5b"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.853726 4724 csr.go:257] certificate signing request csr-2dbpk is issued
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.856297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" event={"ID":"e870c417-07ff-4c42-8e8f-7db6078f3b5d","Type":"ContainerStarted","Data":"022a60120881d992ac4e8b7aced75327e97d2040be9320a25e680167a9c7da0d"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.860582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3f28e966ea805fb30b6d64c4db1bcc808a6189aa57dbed22e914a8b813404bcf"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.863817 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" event={"ID":"66b7c770-0864-43b0-8be8-8c9e26cedb5f","Type":"ContainerStarted","Data":"196b3c4f96f54687b21fc8b18a736886e6a7844a3d98f4deb6503eeb1cc4820d"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.863947 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.879610 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6rznq" podStartSLOduration=121.879584091 podStartE2EDuration="2m1.879584091s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:42.878506954 +0000 UTC m=+178.694706554" watchObservedRunningTime="2026-02-23 17:33:42.879584091 +0000 UTC m=+178.695783691"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.880561 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" podStartSLOduration=121.880513784 podStartE2EDuration="2m1.880513784s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:41.848994263 +0000 UTC m=+177.665193883" watchObservedRunningTime="2026-02-23 17:33:42.880513784 +0000 UTC m=+178.696713384"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.886738 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" event={"ID":"bfa1e7f2-fdc3-47cb-b906-0e138164c57d","Type":"ContainerStarted","Data":"61c81dc05db3b8d954376802c27e162ed3dbc89c13a1cfbfd7f2703fb7bfc7a0"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.903761 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" event={"ID":"a5519032-42eb-483d-8bc4-a1fad9b5dc28","Type":"ContainerStarted","Data":"e668d565861915e4061b2317dddb760340522af5518fa4087b967a89ac2fb4cd"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.913241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" event={"ID":"4a5ae8b2-dd3e-49f0-97d2-790cc9b76107","Type":"ContainerStarted","Data":"5fa1aeda5936c1673eb5ccca7d60576ec54e19eb8022ff377c0a1618bd0aa887"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.914366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.918057 4724 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-45j55 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.918134 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" podUID="4a5ae8b2-dd3e-49f0-97d2-790cc9b76107" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.924022 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" event={"ID":"a71874e2-c4df-47f7-af47-b85d817995bf","Type":"ContainerStarted","Data":"1ba7f4d00bf5f5afa6e45a2b95d42998278bc5c463bceeb0d665db0c6c30a76c"}
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.930812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" event={"ID":"000721f3-4213-4d68-b390-d172a0fea797","Type":"ContainerStarted","Data":"a8bc3479c419f34f1bf166910ecdf79b35543891ee66f5dd0ff76373b8f47f91"}
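The "Observed pod startup duration" records above measure from podCreationTimestamp to observedRunningTime; with firstStartedPulling and lastFinishedPulling at the zero time (no image-pull window to exclude, consistent with images already present on the node), podStartSLOduration and podStartE2EDuration coincide. For service-ca-operator: 17:31:41 to 17:33:42.878 is roughly 121.88s, i.e. the logged "2m1.879...s". The arithmetic in Go, purely as illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps adapted from the service-ca-operator record above.
	created, _ := time.Parse(time.RFC3339, "2026-02-23T17:31:41Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-02-23T17:33:42.878506954Z")
	// With no pull window to subtract, SLO duration == end-to-end duration.
	// Prints ~2m1.878s; the tracker's own figure differs in the last digits
	// because it is computed from its internal observation timestamps.
	fmt.Println("podStartE2EDuration:", observed.Sub(created))
}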
kubenswrapper[4724]: I0223 17:33:42.930812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" event={"ID":"000721f3-4213-4d68-b390-d172a0fea797","Type":"ContainerStarted","Data":"a8bc3479c419f34f1bf166910ecdf79b35543891ee66f5dd0ff76373b8f47f91"} Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.931273 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sxnxc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.931325 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" podUID="edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.932060 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-ch92v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.932118 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ch92v" podUID="254acf08-7bbb-4e84-95ed-21ce84733817" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.945409 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:42 crc kubenswrapper[4724]: E0223 17:33:42.947009 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.446992373 +0000 UTC m=+179.263191973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.958239 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-72w5z" podStartSLOduration=7.958215551 podStartE2EDuration="7.958215551s" podCreationTimestamp="2026-02-23 17:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:42.953457433 +0000 UTC m=+178.769657043" watchObservedRunningTime="2026-02-23 17:33:42.958215551 +0000 UTC m=+178.774415151"
Feb 23 17:33:42 crc kubenswrapper[4724]: I0223 17:33:42.975340 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" podStartSLOduration=121.975316776 podStartE2EDuration="2m1.975316776s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:42.974601358 +0000 UTC m=+178.790800968" watchObservedRunningTime="2026-02-23 17:33:42.975316776 +0000 UTC m=+178.791516376"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:42.999847 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95" podStartSLOduration=122.999825794 podStartE2EDuration="2m2.999825794s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:42.997839734 +0000 UTC m=+178.814039344" watchObservedRunningTime="2026-02-23 17:33:42.999825794 +0000 UTC m=+178.816025394"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.027166 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:43 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:43 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:43 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.027253 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.047692 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.049811 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.549770473 +0000 UTC m=+179.365970073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.075611 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-p4vpc" podStartSLOduration=122.075579513 podStartE2EDuration="2m2.075579513s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.04319321 +0000 UTC m=+178.859392810" watchObservedRunningTime="2026-02-23 17:33:43.075579513 +0000 UTC m=+178.891779103"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.076986 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fknnv" podStartSLOduration=123.076978918 podStartE2EDuration="2m3.076978918s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.0734379 +0000 UTC m=+178.889637500" watchObservedRunningTime="2026-02-23 17:33:43.076978918 +0000 UTC m=+178.893178518"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.152018 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" podStartSLOduration=123.151985549 podStartE2EDuration="2m3.151985549s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.140222367 +0000 UTC m=+178.956421977" watchObservedRunningTime="2026-02-23 17:33:43.151985549 +0000 UTC m=+178.968185149"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.153803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.156367 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.656351937 +0000 UTC m=+179.472551537 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.176811 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n4kjh" podStartSLOduration=122.176782154 podStartE2EDuration="2m2.176782154s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.165865013 +0000 UTC m=+178.982064633" watchObservedRunningTime="2026-02-23 17:33:43.176782154 +0000 UTC m=+178.992981754"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.198783 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv99" podStartSLOduration=122.198759529 podStartE2EDuration="2m2.198759529s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.192647317 +0000 UTC m=+179.008846917" watchObservedRunningTime="2026-02-23 17:33:43.198759529 +0000 UTC m=+179.014959129"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.220863 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" podStartSLOduration=122.220834977 podStartE2EDuration="2m2.220834977s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.219794941 +0000 UTC m=+179.035994531" watchObservedRunningTime="2026-02-23 17:33:43.220834977 +0000 UTC m=+179.037034577"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.258020 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.258565 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.758541672 +0000 UTC m=+179.574741272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.309253 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-plsvj" podStartSLOduration=122.30922968 podStartE2EDuration="2m2.30922968s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.306520132 +0000 UTC m=+179.122719722" watchObservedRunningTime="2026-02-23 17:33:43.30922968 +0000 UTC m=+179.125429270"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.309560 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" podStartSLOduration=122.309555188 podStartE2EDuration="2m2.309555188s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.267574406 +0000 UTC m=+179.083774016" watchObservedRunningTime="2026-02-23 17:33:43.309555188 +0000 UTC m=+179.125754788"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.351778 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-sjpqd" podStartSLOduration=122.351750195 podStartE2EDuration="2m2.351750195s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.329221236 +0000 UTC m=+179.145420836" watchObservedRunningTime="2026-02-23 17:33:43.351750195 +0000 UTC m=+179.167949795"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.359323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.359793 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.859779874 +0000 UTC m=+179.675979474 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.380600 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" podStartSLOduration=122.38057332 podStartE2EDuration="2m2.38057332s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.378514119 +0000 UTC m=+179.194713719" watchObservedRunningTime="2026-02-23 17:33:43.38057332 +0000 UTC m=+179.196772920"
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.460954 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.461168 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.961118228 +0000 UTC m=+179.777317838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.461259 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.461654 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:43.961636291 +0000 UTC m=+179.777835881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.562657 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.562879 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.062850482 +0000 UTC m=+179.879050092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.563985 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.564630 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.064608266 +0000 UTC m=+179.880807866 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.675646 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.678124 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.178081831 +0000 UTC m=+179.994281431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.779930 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.780406 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.280374949 +0000 UTC m=+180.096574549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.855826 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 17:28:42 +0000 UTC, rotation deadline is 2026-11-29 10:52:05.89818738 +0000 UTC
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.855879 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6689h18m22.042315368s for next certificate rotation
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.881090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.881452 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.381433386 +0000 UTC m=+180.197632986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.943449 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" event={"ID":"e870c417-07ff-4c42-8e8f-7db6078f3b5d","Type":"ContainerStarted","Data":"9f1bb630344349c6858c2dbed2702e37c08e1968d11be23892a0dbd2936f26b1"}
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.950175 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" event={"ID":"32ce877e-4675-4f92-a2b3-7be9a27b36d2","Type":"ContainerStarted","Data":"b8241846b0f289e844ac313882ec8b00c2d3290e644cd434dd556dd6eca8644c"}
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.954006 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" event={"ID":"0880746f-1a97-4302-8bd8-062a1f849e23","Type":"ContainerStarted","Data":"677a271233bfedc89cae40f92662aaed47bcaede75a2b2e29a05a349531737eb"}
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.954049 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" event={"ID":"0880746f-1a97-4302-8bd8-062a1f849e23","Type":"ContainerStarted","Data":"5a335b9bf9bff1dbd98f9ff42b5ff3e0a66577a5078ab9be539c2e3c080a04c2"}
Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.956204 4724 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" event={"ID":"dbbfa5dc-5393-48b1-bc93-3d6e6ccf188d","Type":"ContainerStarted","Data":"7c9d2d12c7317963f4f85c91b4d6f490976eff2ab51ae3b863db9ff6c2babaa4"} Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958370 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958767 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sxnxc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958803 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" podUID="edfc2b29-2ed6-4d6f-aed5-61bdfd205dc6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958837 4724 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-45j55 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958911 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55" podUID="4a5ae8b2-dd3e-49f0-97d2-790cc9b76107" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.958999 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n984k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.959022 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.978521 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gddsl" podStartSLOduration=122.978490644 podStartE2EDuration="2m2.978490644s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:43.97512717 +0000 UTC m=+179.791327120" watchObservedRunningTime="2026-02-23 17:33:43.978490644 +0000 UTC m=+179.794690264" Feb 23 17:33:43 crc kubenswrapper[4724]: I0223 17:33:43.982473 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:43 crc kubenswrapper[4724]: E0223 17:33:43.983273 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.483261512 +0000 UTC m=+180.299461112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.024436 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:44 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:44 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:44 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.024521 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.045818 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vrsk4" podStartSLOduration=123.045790423 podStartE2EDuration="2m3.045790423s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:44.042613785 +0000 UTC m=+179.858813395" watchObservedRunningTime="2026-02-23 17:33:44.045790423 +0000 UTC m=+179.861990023" Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.077368 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" podStartSLOduration=124.077336796 podStartE2EDuration="2m4.077336796s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:44.073826609 +0000 UTC m=+179.890026209" watchObservedRunningTime="2026-02-23 17:33:44.077336796 +0000 UTC m=+179.893536386" Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.084378 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.084519 4724 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.584495934 +0000 UTC m=+180.400695534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.085540 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.086705 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.586689848 +0000 UTC m=+180.402889448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.117801 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv" podStartSLOduration=123.117772099 podStartE2EDuration="2m3.117772099s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:44.114236711 +0000 UTC m=+179.930436311" watchObservedRunningTime="2026-02-23 17:33:44.117772099 +0000 UTC m=+179.933971699"
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.187169 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.187285 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.687263413 +0000 UTC m=+180.503463013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.187552 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.187940 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.68793059 +0000 UTC m=+180.504130190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.307818 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.308291 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.808237014 +0000 UTC m=+180.624436614 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.312788 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.313379 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.813363272 +0000 UTC m=+180.629562872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.414270 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.414828 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.914803828 +0000 UTC m=+180.731003428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.415085 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.415579 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:44.915556087 +0000 UTC m=+180.731755687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.516646 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.516799 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.016771078 +0000 UTC m=+180.832970678 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.617924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.618492 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.118469011 +0000 UTC m=+180.934668611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.718979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.719240 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.21921925 +0000 UTC m=+181.035418850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.820521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.820998 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.320976744 +0000 UTC m=+181.137176344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.922127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.922249 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.422228336 +0000 UTC m=+181.238427936 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:44 crc kubenswrapper[4724]: I0223 17:33:44.922651 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:44 crc kubenswrapper[4724]: E0223 17:33:44.923157 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.423134729 +0000 UTC m=+181.239334329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.007548 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" event={"ID":"689a8ef9-8892-4a61-b050-540f2e13ac4c","Type":"ContainerStarted","Data":"fd10077cf67ec4b4cf6de4b8b39e1f0782166bd8eb6bddce366d221cb67d9419"}
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.023693 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:45 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:45 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:45 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.023771 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.025000 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.025384 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.025865 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.525839687 +0000 UTC m=+181.342039277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.029656 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-45j55"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.030110 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.068137 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e106e1ec-19f4-4d6b-b71f-dc04dcc437b4-metrics-certs\") pod \"network-metrics-daemon-q2jvs\" (UID: \"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4\") " pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.130402 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.130788 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.63077419 +0000 UTC m=+181.446973790 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.234963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.235346 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.735325004 +0000 UTC m=+181.551524604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.275793 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.282499 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q2jvs"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.338104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.338595 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.838576115 +0000 UTC m=+181.654775715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.438937 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.439222 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.939183181 +0000 UTC m=+181.755382781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.439376 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.439800 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:45.939781626 +0000 UTC m=+181.755981226 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.549907 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.550378 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:46.05036042 +0000 UTC m=+181.866560020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.652378 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.652907 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:46.152890193 +0000 UTC m=+181.969089793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.756796 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.757446 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:46.257425427 +0000 UTC m=+182.073625027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.761238 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.762348 4724 util.go:30] "No sandbox for pod can be found. 
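The mount/unmount cycle above keeps failing because the kubelet will not touch a CSI volume until the named driver has registered with the node's plugin registry; the csi-hostpathplugin-bdrlz pod that provides kubevirt.io.hostpath-provisioner is only just starting (its ContainerStarted events appear further down), so the retries are expected to self-heal once registration completes. One way to see which drivers a node's kubelet currently knows about is the CSINode object, which mirrors node-level registration. A minimal client-go sketch, assuming cluster access via $KUBECONFIG; the node name "crc" is taken from this journal:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes $KUBECONFIG points at a kubeconfig for this cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// CSINode lists the CSI drivers registered with this node's kubelet.
    	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Printf("registered driver: %s (nodeID %s)\n", d.Name, d.NodeID)
    	}
    }

If kubevirt.io.hostpath-provisioner is missing from that list while its plugin pod is still coming up, errors like the ones above are the expected symptom rather than a persistent fault.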
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.770260 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.770632 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.799407 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.845222 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q2jvs"]
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.863121 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.863200 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.863238 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.863682 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.363666492 +0000 UTC m=+182.179866092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.964398 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.964522 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.464504834 +0000 UTC m=+182.280704434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.964797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.964833 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.964884 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:45 crc kubenswrapper[4724]: I0223 17:33:45.965232 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:45 crc kubenswrapper[4724]: E0223 17:33:45.965518 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.465510439 +0000 UTC m=+182.281710039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.009435 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrhf2"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.010582 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.024537 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.041760 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:46 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:46 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:46 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.042206 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.045308 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrhf2"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.046581 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.070370 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.071676 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.571606901 +0000 UTC m=+182.387806501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.081932 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" event={"ID":"689a8ef9-8892-4a61-b050-540f2e13ac4c","Type":"ContainerStarted","Data":"100acfdc0ec80fb7725df386f5a39ae4e23d053fb1b3f4f3a2a403651cf1e1ef"}
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.097717 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.117511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" event={"ID":"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4","Type":"ContainerStarted","Data":"3285d81f6df6f8dae8ce65d27f2975ab94712112d8dc155314c02bd961bcad5b"}
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.171678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjf6l\" (UniqueName: \"kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.171729 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.171761 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.171809 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.172146 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.672132595 +0000 UTC m=+182.488332185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.238137 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ft7cc"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.244346 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.250897 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.268100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft7cc"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.273963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.274274 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.274366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjf6l\" (UniqueName: \"kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.274428 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.274869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.277015 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.305483 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjf6l\" (UniqueName: \"kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l\") pod \"certified-operators-jrhf2\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.333189 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.83315417 +0000 UTC m=+182.649353770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.365027 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.380981 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5md\" (UniqueName: \"kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.381504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.381535 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.381594 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.381963 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.881949131 +0000 UTC m=+182.698148731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.426385 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.428020 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.453214 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.484178 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.484585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.484675 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.484731 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg5md\" (UniqueName: \"kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.484838 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:46.984806182 +0000 UTC m=+182.801005782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.485682 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.485828 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.564637 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nh95"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.570507 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg5md\" (UniqueName: \"kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md\") pod \"community-operators-ft7cc\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.592164 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.592244 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klh5w\" (UniqueName: \"kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.592283 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.592303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.592558 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.092536945 +0000 UTC m=+182.908736545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.607525 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.641065 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j966w"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.642309 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.693039 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j966w"]
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.693937 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.695251 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.195221293 +0000 UTC m=+183.011420883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
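Note the fixed rhythm of these failures: each error from nestedpendingoperations.go arms a per-operation cooldown, and the reconciler's subsequent passes are rejected outright until the printed deadline (last failure + the 500ms durationBeforeRetry seen here; the kubelet can also grow this delay for operations that keep failing). An illustrative sketch of that gating, not the kubelet's actual code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pendingOp mirrors, in spirit, one entry in the kubelet's pending-operations
    // table: the last failure time plus the delay that must elapse before the
    // same volume/pod operation may be attempted again.
    type pendingOp struct {
    	lastErrorTime time.Time
    	backoff       time.Duration
    }

    // allowed reproduces the check behind the
    // "No retries permitted until <deadline> (durationBeforeRetry <d>)" messages.
    func (p pendingOp) allowed(now time.Time) error {
    	deadline := p.lastErrorTime.Add(p.backoff)
    	if now.Before(deadline) {
    		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
    			deadline.Format(time.RFC3339Nano), p.backoff)
    	}
    	return nil
    }

    func main() {
    	op := pendingOp{lastErrorTime: time.Now(), backoff: 500 * time.Millisecond}
    	fmt.Println(op.allowed(time.Now()))        // rejected: still inside the cooldown
    	time.Sleep(600 * time.Millisecond)
    	fmt.Println(op.allowed(time.Now()) == nil) // true: a retry may now start
    }

This is why the same error repeats roughly twice a second in the stream: each rejected pass re-queues the work rather than blocking on it.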
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.712930 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.713173 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klh5w\" (UniqueName: \"kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.713266 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.713307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.713982 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.715552 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.715933 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.215909506 +0000 UTC m=+183.032109316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.827082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klh5w\" (UniqueName: \"kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w\") pod \"certified-operators-dfqzd\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") " pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.828440 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.828612 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.828647 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfq8g\" (UniqueName: \"kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.828685 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.828768 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.328753305 +0000 UTC m=+183.144952905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.857541 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.930031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.930088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.930157 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.930192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfq8g\" (UniqueName: \"kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: E0223 17:33:46.930856 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.430844918 +0000 UTC m=+183.247044518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.931368 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.931595 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:46 crc kubenswrapper[4724]: I0223 17:33:46.987534 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfq8g\" (UniqueName: \"kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g\") pod \"community-operators-j966w\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") " pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.000927 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.028704 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:47 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:47 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:47 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.028768 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.032088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.032863 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.532831939 +0000 UTC m=+183.349031539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.033262 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.033657 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.533646629 +0000 UTC m=+183.349846229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.054121 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.054791 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.078028 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.078715 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" containerID="cri-o://a57eb595fa93ecaedb32a080094709af0ecc7a1433b861be3244510d99225e53" gracePeriod=30
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.093843 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.106775 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.107164 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" containerID="cri-o://35a4e5e1ed3010b4c084c7200b6b2bd0e4e9d13275a81c92b6cbdc70da6aadd7" gracePeriod=30
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.118759 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.118850 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.118853 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.131827 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.135102 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.138893 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.638862709 +0000 UTC m=+183.455062309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.138994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.139131 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.139157 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.139493 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.639484304 +0000 UTC m=+183.455683904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.148964 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.168833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" event={"ID":"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4","Type":"ContainerStarted","Data":"a482666262a58a4291619e515261e07af11eadb895b111ae745558d6595d359c"}
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.179317 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" event={"ID":"689a8ef9-8892-4a61-b050-540f2e13ac4c","Type":"ContainerStarted","Data":"9c52d177e04b9fb9767241c483dd4c4fd713c276e1be84cbc70f96fb90663ae4"}
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.232365 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.232424 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.232630 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.232681 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.235290 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.240958 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.241137 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.741104306 +0000 UTC m=+183.557303906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.241192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.241333 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.241366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.266650 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.267921 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.76789815 +0000 UTC m=+183.584097750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.314893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.368202 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.368591 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.868569288 +0000 UTC m=+183.684768888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.374527 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.375806 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.405208 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.420624 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5"
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.469585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.470313 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-02-23 17:33:47.970289941 +0000 UTC m=+183.786489531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.512053 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrhf2"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.559300 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft7cc"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.577836 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.581042 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.581873 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-02-23 17:33:48.081845789 +0000 UTC m=+183.898045389 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:47 crc kubenswrapper[4724]: W0223 17:33:47.660016 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d8744a3_347d_4260_963c_5629092380fe.slice/crio-cde10bc33a70f3a714116ba206774c8228222e391a33449aa6523f9dbe91fcf1 WatchSource:0}: Error finding container cde10bc33a70f3a714116ba206774c8228222e391a33449aa6523f9dbe91fcf1: Status 404 returned error can't find the container with id cde10bc33a70f3a714116ba206774c8228222e391a33449aa6523f9dbe91fcf1 Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.683283 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.683697 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.183683376 +0000 UTC m=+183.999882976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.719936 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wgdx8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.720032 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.792176 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.792920 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.292903045 +0000 UTC m=+184.109102645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.894026 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.894479 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.394460205 +0000 UTC m=+184.210659805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.912409 4724 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.921930 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j966w"] Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.983785 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.984941 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g5s4k"] Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.991133 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.995817 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 17:33:47 crc kubenswrapper[4724]: I0223 17:33:47.998716 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:47 crc kubenswrapper[4724]: E0223 17:33:47.999155 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.499139142 +0000 UTC m=+184.315338742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.019502 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-s77tw" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.020732 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5s4k"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.024807 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:48 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:48 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:48 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.024859 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.100432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.100477 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdh6\" (UniqueName: \"kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.100591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.100616 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.101163 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.601150263 +0000 UTC m=+184.417349863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.199134 4724 generic.go:334] "Generic (PLEG): container finished" podID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerID="a57eb595fa93ecaedb32a080094709af0ecc7a1433b861be3244510d99225e53" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.199219 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" event={"ID":"7746d0a1-242b-4afc-b968-36853a4ad1ac","Type":"ContainerDied","Data":"a57eb595fa93ecaedb32a080094709af0ecc7a1433b861be3244510d99225e53"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.199372 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" event={"ID":"7746d0a1-242b-4afc-b968-36853a4ad1ac","Type":"ContainerDied","Data":"42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.199405 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42351a7c651bfaa51b765d71b07ffa1a57b3332a746f1ecb76b54394c9f341fa" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202119 4724 generic.go:334] "Generic (PLEG): container finished" podID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerID="621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202169 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerDied","Data":"621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202184 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerStarted","Data":"96cd3d5abffa7e7cbcc02c362f821ea4b33a25ecf642a9550312fb30c3736aac"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202351 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202873 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202924 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.202950 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mdh6\" (UniqueName: \"kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.203343 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.703324017 +0000 UTC m=+184.519523617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.204144 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.204189 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.207808 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.207929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"25854d47-4d3e-4817-b27c-a186432e8c32","Type":"ContainerStarted","Data":"2f73619746c4693959280709543409fa88165c0f43dba0ceb302ccf1266ea3f2"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.207957 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"25854d47-4d3e-4817-b27c-a186432e8c32","Type":"ContainerStarted","Data":"a64761a53dbed5793fced40c45dbdd0d3e44ccd21ff3a9a8b490a39fdb08e6ed"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.209775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerStarted","Data":"31b18875f763481e8458b72f028ac2d78d4f087dd5c254ae069a2495c4568a63"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.212934 4724 
generic.go:334] "Generic (PLEG): container finished" podID="5d8744a3-347d-4260-963c-5629092380fe" containerID="7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.212996 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerDied","Data":"7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.213025 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerStarted","Data":"cde10bc33a70f3a714116ba206774c8228222e391a33449aa6523f9dbe91fcf1"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.215515 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ch92v" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.222513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" event={"ID":"689a8ef9-8892-4a61-b050-540f2e13ac4c","Type":"ContainerStarted","Data":"d94b26da097062572cd6176817e5f612541f849d76e4b9315dba3fb03ecf8a2a"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.226684 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.226755 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.234812 4724 patch_prober.go:28] interesting pod/console-f9d7485db-fknnv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.234882 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fknnv" podUID="997b5710-9b99-4207-92da-28b7a1923db2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.237405 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mdh6\" (UniqueName: \"kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6\") pod \"redhat-marketplace-g5s4k\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.240866 4724 generic.go:334] "Generic (PLEG): container finished" podID="67dbde4d-5c0f-45cf-82ae-435b16e17121" containerID="ee5f8314678a01afc418a25852abecc282f30eba6f14ba505c8be9808761db1e" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.241088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" event={"ID":"67dbde4d-5c0f-45cf-82ae-435b16e17121","Type":"ContainerDied","Data":"ee5f8314678a01afc418a25852abecc282f30eba6f14ba505c8be9808761db1e"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.247692 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerID="b39c7a459045b0197a5009823d0f79af76e70dbb5a8ce221b4a5ffdfa5581dae" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.247787 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerDied","Data":"b39c7a459045b0197a5009823d0f79af76e70dbb5a8ce221b4a5ffdfa5581dae"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.247825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerStarted","Data":"de45fc482deeeed1b70f7bec0cef22f0148d7de5560804af93f887fefa59596a"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.251482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f94958ec-8484-4e01-b05c-f00b60bf4554","Type":"ContainerStarted","Data":"1565d80f2a21d86910f05521a8f67bc59a7099edf36f0c7f1ff20c9feebbe74c"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.257838 4724 generic.go:334] "Generic (PLEG): container finished" podID="2c14acfb-83f3-4782-84df-6558dde9c268" containerID="35a4e5e1ed3010b4c084c7200b6b2bd0e4e9d13275a81c92b6cbdc70da6aadd7" exitCode=0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.257951 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" event={"ID":"2c14acfb-83f3-4782-84df-6558dde9c268","Type":"ContainerDied","Data":"35a4e5e1ed3010b4c084c7200b6b2bd0e4e9d13275a81c92b6cbdc70da6aadd7"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.257981 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" event={"ID":"2c14acfb-83f3-4782-84df-6558dde9c268","Type":"ContainerDied","Data":"f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.257992 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d37f779af01468aef03f4bbe1e684ed646d95220c6de1a88b1acf020e775f7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.267901 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q2jvs" event={"ID":"e106e1ec-19f4-4d6b-b71f-dc04dcc437b4","Type":"ContainerStarted","Data":"e600a983d3fc87153fa35950d667518a668fc1c221c9a62a846d8cb59bdd8492"} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.280569 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p78b5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.282142 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.304593 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.308092 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.808076105 +0000 UTC m=+184.624275705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.308632 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.346899 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bdrlz" podStartSLOduration=13.346867138 podStartE2EDuration="13.346867138s" podCreationTimestamp="2026-02-23 17:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:48.335535787 +0000 UTC m=+184.151735387" watchObservedRunningTime="2026-02-23 17:33:48.346867138 +0000 UTC m=+184.163066738" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.360269 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.360351 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.372509 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.372477913 podStartE2EDuration="3.372477913s" podCreationTimestamp="2026-02-23 17:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:48.350781695 +0000 UTC m=+184.166981295" watchObservedRunningTime="2026-02-23 17:33:48.372477913 +0000 UTC m=+184.188677513" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.376331 4724 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xtsjf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]log ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]etcd ok Feb 23 17:33:48 crc 
kubenswrapper[4724]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/generic-apiserver-start-informers ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/max-in-flight-filter ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 23 17:33:48 crc kubenswrapper[4724]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 23 17:33:48 crc kubenswrapper[4724]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/project.openshift.io-projectcache ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-startinformers ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 23 17:33:48 crc kubenswrapper[4724]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 23 17:33:48 crc kubenswrapper[4724]: livez check failed Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.376461 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf" podUID="0880746f-1a97-4302-8bd8-062a1f849e23" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.381652 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.385129 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.390861 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.391801 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sxnxc" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.397870 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.399959 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.399995 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" containerName="route-controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.400041 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.400051 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.402293 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" 
containerName="route-controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.402373 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.405302 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407052 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") pod \"2c14acfb-83f3-4782-84df-6558dde9c268\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407097 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kf8\" (UniqueName: \"kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8\") pod \"2c14acfb-83f3-4782-84df-6558dde9c268\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407141 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") pod \"7746d0a1-242b-4afc-b968-36853a4ad1ac\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407166 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") pod \"7746d0a1-242b-4afc-b968-36853a4ad1ac\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407218 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca\") pod \"7746d0a1-242b-4afc-b968-36853a4ad1ac\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407668 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407777 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") pod \"2c14acfb-83f3-4782-84df-6558dde9c268\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.407811 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") pod \"7746d0a1-242b-4afc-b968-36853a4ad1ac\" (UID: \"7746d0a1-242b-4afc-b968-36853a4ad1ac\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.408625 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config\") pod 
\"2c14acfb-83f3-4782-84df-6558dde9c268\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.408683 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") pod \"2c14acfb-83f3-4782-84df-6558dde9c268\" (UID: \"2c14acfb-83f3-4782-84df-6558dde9c268\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.409202 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca" (OuterVolumeSpecName: "client-ca") pod "7746d0a1-242b-4afc-b968-36853a4ad1ac" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.410031 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.910013414 +0000 UTC m=+184.726213014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.410518 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca" (OuterVolumeSpecName: "client-ca") pod "2c14acfb-83f3-4782-84df-6558dde9c268" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.411525 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2c14acfb-83f3-4782-84df-6558dde9c268" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.411620 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config" (OuterVolumeSpecName: "config") pod "2c14acfb-83f3-4782-84df-6558dde9c268" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.412569 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.413214 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.413234 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.413244 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.413255 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c14acfb-83f3-4782-84df-6558dde9c268-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.414326 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9vlcc" Feb 23 17:33:48 crc kubenswrapper[4724]: E0223 17:33:48.417521 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 17:33:48.91750651 +0000 UTC m=+184.733706100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qqsg7" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.417668 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7746d0a1-242b-4afc-b968-36853a4ad1ac" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.418222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config" (OuterVolumeSpecName: "config") pod "7746d0a1-242b-4afc-b968-36853a4ad1ac" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.444410 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2c14acfb-83f3-4782-84df-6558dde9c268" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.444518 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb" (OuterVolumeSpecName: "kube-api-access-w8nmb") pod "7746d0a1-242b-4afc-b968-36853a4ad1ac" (UID: "7746d0a1-242b-4afc-b968-36853a4ad1ac"). InnerVolumeSpecName "kube-api-access-w8nmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.448229 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.451876 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8" (OuterVolumeSpecName: "kube-api-access-w8kf8") pod "2c14acfb-83f3-4782-84df-6558dde9c268" (UID: "2c14acfb-83f3-4782-84df-6558dde9c268"). InnerVolumeSpecName "kube-api-access-w8kf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.477108 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q2jvs" podStartSLOduration=128.477086088 podStartE2EDuration="2m8.477086088s" podCreationTimestamp="2026-02-23 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:48.476789181 +0000 UTC m=+184.292988781" watchObservedRunningTime="2026-02-23 17:33:48.477086088 +0000 UTC m=+184.293285688" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.491155 4724 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T17:33:47.912435251Z","Handler":null,"Name":""} Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.496213 4724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.496656 4724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515331 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515636 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdvh\" (UniqueName: 
\"kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515786 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515977 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c14acfb-83f3-4782-84df-6558dde9c268-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.515992 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kf8\" (UniqueName: \"kubernetes.io/projected/2c14acfb-83f3-4782-84df-6558dde9c268-kube-api-access-w8kf8\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.516002 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nmb\" (UniqueName: \"kubernetes.io/projected/7746d0a1-242b-4afc-b968-36853a4ad1ac-kube-api-access-w8nmb\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.516011 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7746d0a1-242b-4afc-b968-36853a4ad1ac-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.516019 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7746d0a1-242b-4afc-b968-36853a4ad1ac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.549849 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.552522 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.553290 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.556278 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.557324 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.570375 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.580548 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.620575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-client-ca\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.620969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621004 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621025 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgdvh\" (UniqueName: \"kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621083 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc 
kubenswrapper[4724]: I0223 17:33:48.621119 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-config\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jvb\" (UniqueName: \"kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621174 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621208 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621264 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621293 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.621331 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wxj\" (UniqueName: \"kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.626601 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.627065 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.633732 4724 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.633783 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.657508 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgdvh\" (UniqueName: \"kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh\") pod \"redhat-marketplace-7l6ld\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.715358 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qqsg7\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.723756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-client-ca\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.725547 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-client-ca\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.725632 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.726717 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 
17:33:48.726788 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.726871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.727675 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-config\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.728915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jvb\" (UniqueName: \"kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.729332 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.728859 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-config\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.727837 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.729059 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.729527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.729633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wxj\" (UniqueName: \"kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.733103 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.734832 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.738812 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.747642 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wxj\" (UniqueName: \"kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj\") pod \"controller-manager-7bd75d5955-5vvll\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.748236 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jvb\" (UniqueName: \"kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb\") pod \"route-controller-manager-77f99799b5-558d5\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") " pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.752088 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.794164 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5s4k"] Feb 23 17:33:48 crc kubenswrapper[4724]: W0223 17:33:48.808888 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod828827aa_9a76_4ba6_962f_ad0ac278bd72.slice/crio-6046f6c31ebda1bea92284c2c4e0ba48ea721268593e638e97380d80e789300c WatchSource:0}: Error finding container 6046f6c31ebda1bea92284c2c4e0ba48ea721268593e638e97380d80e789300c: Status 404 returned error can't find the container with id 6046f6c31ebda1bea92284c2c4e0ba48ea721268593e638e97380d80e789300c Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.928048 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.934719 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5jdvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.934811 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.939545 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:48 crc kubenswrapper[4724]: I0223 17:33:48.984741 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.002954 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.006129 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:33:49 crc kubenswrapper[4724]: W0223 17:33:49.022176 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d52ec05_b283_48f8_aed2_50c0a6dcc9e3.slice/crio-6094ca6d4531c565a0f33aadc2dc17cb69ab55c666b6a51abec7b48e55731764 WatchSource:0}: Error finding container 6094ca6d4531c565a0f33aadc2dc17cb69ab55c666b6a51abec7b48e55731764: Status 404 returned error can't find the container with id 6094ca6d4531c565a0f33aadc2dc17cb69ab55c666b6a51abec7b48e55731764 Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.023345 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:49 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:49 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:49 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.023619 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.232051 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.285544 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.319116 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f94958ec-8484-4e01-b05c-f00b60bf4554","Type":"ContainerStarted","Data":"ddc5eaf5ae41d7f0b816ff2ede60b4f499d6917eefa8365a4659a77e9216481c"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.329753 4724 generic.go:334] "Generic (PLEG): container finished" podID="25854d47-4d3e-4817-b27c-a186432e8c32" containerID="2f73619746c4693959280709543409fa88165c0f43dba0ceb302ccf1266ea3f2" exitCode=0 Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.330337 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"25854d47-4d3e-4817-b27c-a186432e8c32","Type":"ContainerDied","Data":"2f73619746c4693959280709543409fa88165c0f43dba0ceb302ccf1266ea3f2"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.340891 4724 generic.go:334] "Generic (PLEG): container finished" podID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerID="23d9db63a7c114c6335d5e0eee86be89ed2475b44750dc18bdb1fe08c01d1dec" exitCode=0 Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.341049 4724 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerDied","Data":"23d9db63a7c114c6335d5e0eee86be89ed2475b44750dc18bdb1fe08c01d1dec"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.341086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerStarted","Data":"6046f6c31ebda1bea92284c2c4e0ba48ea721268593e638e97380d80e789300c"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.377526 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.377498907 podStartE2EDuration="3.377498907s" podCreationTimestamp="2026-02-23 17:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:49.341429262 +0000 UTC m=+185.157628862" watchObservedRunningTime="2026-02-23 17:33:49.377498907 +0000 UTC m=+185.193698517" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.385822 4724 generic.go:334] "Generic (PLEG): container finished" podID="170d3970-9dce-48c5-9b25-9d30d5780282" containerID="3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3" exitCode=0 Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.388586 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerDied","Data":"3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.411224 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kbxv5"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.414450 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.418648 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5jdvd" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.422512 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" event={"ID":"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3","Type":"ContainerStarted","Data":"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.422569 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" event={"ID":"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3","Type":"ContainerStarted","Data":"6094ca6d4531c565a0f33aadc2dc17cb69ab55c666b6a51abec7b48e55731764"} Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.422629 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.423747 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.432218 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbxv5"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.433127 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 17:33:49 crc kubenswrapper[4724]: W0223 17:33:49.437926 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57071a98_7587_4bd9_90a5_eb4ee3f86979.slice/crio-61e9b7173e29bba4f89b76f9fb728a12dc2524e99325050dfd3b6774336c2776 WatchSource:0}: Error finding container 61e9b7173e29bba4f89b76f9fb728a12dc2524e99325050dfd3b6774336c2776: Status 404 returned error can't find the container with id 61e9b7173e29bba4f89b76f9fb728a12dc2524e99325050dfd3b6774336c2776 Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.476299 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" podStartSLOduration=128.476276677 podStartE2EDuration="2m8.476276677s" podCreationTimestamp="2026-02-23 17:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:49.475470137 +0000 UTC m=+185.291669727" watchObservedRunningTime="2026-02-23 17:33:49.476276677 +0000 UTC m=+185.292476277" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.527827 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.540257 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5jdvd"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.559185 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.560276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.560646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnv75\" (UniqueName: \"kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.560703 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " 
pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.562972 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.568868 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgdx8"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.650942 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.666589 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.666661 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.669298 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.669917 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnv75\" (UniqueName: \"kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.670221 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.694105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnv75\" (UniqueName: \"kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75\") pod \"redhat-operators-kbxv5\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.776626 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.784729 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.786232 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.800340 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.810928 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.871755 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume\") pod \"67dbde4d-5c0f-45cf-82ae-435b16e17121\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.871840 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume\") pod \"67dbde4d-5c0f-45cf-82ae-435b16e17121\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.871883 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4msdm\" (UniqueName: \"kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm\") pod \"67dbde4d-5c0f-45cf-82ae-435b16e17121\" (UID: \"67dbde4d-5c0f-45cf-82ae-435b16e17121\") " Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.873214 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mhb\" (UniqueName: \"kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.873245 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.873282 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.873491 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume" (OuterVolumeSpecName: "config-volume") pod "67dbde4d-5c0f-45cf-82ae-435b16e17121" (UID: "67dbde4d-5c0f-45cf-82ae-435b16e17121"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.878077 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67dbde4d-5c0f-45cf-82ae-435b16e17121" (UID: "67dbde4d-5c0f-45cf-82ae-435b16e17121"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.879078 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm" (OuterVolumeSpecName: "kube-api-access-4msdm") pod "67dbde4d-5c0f-45cf-82ae-435b16e17121" (UID: "67dbde4d-5c0f-45cf-82ae-435b16e17121"). InnerVolumeSpecName "kube-api-access-4msdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.974950 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mhb\" (UniqueName: \"kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.975426 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.975469 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.975553 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67dbde4d-5c0f-45cf-82ae-435b16e17121-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.975566 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67dbde4d-5c0f-45cf-82ae-435b16e17121-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.975576 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4msdm\" (UniqueName: \"kubernetes.io/projected/67dbde4d-5c0f-45cf-82ae-435b16e17121-kube-api-access-4msdm\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.976121 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.976128 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content\") pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:49 crc kubenswrapper[4724]: I0223 17:33:49.998692 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mhb\" (UniqueName: \"kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb\") 
pod \"redhat-operators-vklmk\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.025628 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:50 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:50 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:50 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.027586 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.118869 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.164099 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-72w5z" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.317977 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbxv5"] Feb 23 17:33:50 crc kubenswrapper[4724]: W0223 17:33:50.386205 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ebec31_2766_49b2_9f05_9e6de41cf161.slice/crio-601b85ecbec86bec05bb9bbc25566657aacae5742e5b5be1d2a6c7c4f9b3936f WatchSource:0}: Error finding container 601b85ecbec86bec05bb9bbc25566657aacae5742e5b5be1d2a6c7c4f9b3936f: Status 404 returned error can't find the container with id 601b85ecbec86bec05bb9bbc25566657aacae5742e5b5be1d2a6c7c4f9b3936f Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.565066 4724 generic.go:334] "Generic (PLEG): container finished" podID="f94958ec-8484-4e01-b05c-f00b60bf4554" containerID="ddc5eaf5ae41d7f0b816ff2ede60b4f499d6917eefa8365a4659a77e9216481c" exitCode=0 Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.565184 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f94958ec-8484-4e01-b05c-f00b60bf4554","Type":"ContainerDied","Data":"ddc5eaf5ae41d7f0b816ff2ede60b4f499d6917eefa8365a4659a77e9216481c"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.582912 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" event={"ID":"57071a98-7587-4bd9-90a5-eb4ee3f86979","Type":"ContainerStarted","Data":"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.582984 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" event={"ID":"57071a98-7587-4bd9-90a5-eb4ee3f86979","Type":"ContainerStarted","Data":"61e9b7173e29bba4f89b76f9fb728a12dc2524e99325050dfd3b6774336c2776"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.583948 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 
17:33:50.593903 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.612185 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerID="109a6c2d4c359bbe19d0e24a1a1ad8869048c1e8fc3062b5295de54b77cf8eb0" exitCode=0 Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.612342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerDied","Data":"109a6c2d4c359bbe19d0e24a1a1ad8869048c1e8fc3062b5295de54b77cf8eb0"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.612376 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerStarted","Data":"d574c3f749e1d9f5286ba8995c6ad3da64b08fcf820575575a93e46f8c3da70a"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.653200 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" event={"ID":"5ee8459a-b10f-4b12-9222-b3d7407d98a8","Type":"ContainerStarted","Data":"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.653271 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" event={"ID":"5ee8459a-b10f-4b12-9222-b3d7407d98a8","Type":"ContainerStarted","Data":"770e1e0374709054d5724f5abcfeef300ee1c5d7f29f64d3287e156888e71f23"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.654542 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.683004 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.685997 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" event={"ID":"67dbde4d-5c0f-45cf-82ae-435b16e17121","Type":"ContainerDied","Data":"9d9379332d336fde1a922524ec31bbfa172fc5302318b2256b79d9e4745c379a"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.686052 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9379332d336fde1a922524ec31bbfa172fc5302318b2256b79d9e4745c379a" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.686126 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.701242 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.755797 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" podStartSLOduration=3.7553835810000002 podStartE2EDuration="3.755383581s" podCreationTimestamp="2026-02-23 17:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:50.749867764 +0000 UTC m=+186.566067384" watchObservedRunningTime="2026-02-23 17:33:50.755383581 +0000 UTC m=+186.571583181" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.764974 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerStarted","Data":"601b85ecbec86bec05bb9bbc25566657aacae5742e5b5be1d2a6c7c4f9b3936f"} Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.977059 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c14acfb-83f3-4782-84df-6558dde9c268" path="/var/lib/kubelet/pods/2c14acfb-83f3-4782-84df-6558dde9c268/volumes" Feb 23 17:33:50 crc kubenswrapper[4724]: I0223 17:33:50.978062 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7746d0a1-242b-4afc-b968-36853a4ad1ac" path="/var/lib/kubelet/pods/7746d0a1-242b-4afc-b968-36853a4ad1ac/volumes" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.023325 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:51 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:51 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:51 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.023453 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.278921 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.338491 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" podStartSLOduration=4.338467377 podStartE2EDuration="4.338467377s" podCreationTimestamp="2026-02-23 17:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:33:50.831816177 +0000 UTC m=+186.648015777" watchObservedRunningTime="2026-02-23 17:33:51.338467377 +0000 UTC m=+187.154666977" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.361977 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access\") pod \"25854d47-4d3e-4817-b27c-a186432e8c32\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.362164 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir\") pod \"25854d47-4d3e-4817-b27c-a186432e8c32\" (UID: \"25854d47-4d3e-4817-b27c-a186432e8c32\") " Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.362494 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "25854d47-4d3e-4817-b27c-a186432e8c32" (UID: "25854d47-4d3e-4817-b27c-a186432e8c32"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.392573 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "25854d47-4d3e-4817-b27c-a186432e8c32" (UID: "25854d47-4d3e-4817-b27c-a186432e8c32"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.463688 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25854d47-4d3e-4817-b27c-a186432e8c32-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.463738 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25854d47-4d3e-4817-b27c-a186432e8c32-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.843415 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"25854d47-4d3e-4817-b27c-a186432e8c32","Type":"ContainerDied","Data":"a64761a53dbed5793fced40c45dbdd0d3e44ccd21ff3a9a8b490a39fdb08e6ed"} Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.843490 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a64761a53dbed5793fced40c45dbdd0d3e44ccd21ff3a9a8b490a39fdb08e6ed" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.843635 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.865854 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerDied","Data":"ad051b6bb79454faf78f28b8692048101951deb96352d42662bf1f3b679c56d8"} Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.865718 4724 generic.go:334] "Generic (PLEG): container finished" podID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerID="ad051b6bb79454faf78f28b8692048101951deb96352d42662bf1f3b679c56d8" exitCode=0 Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.874136 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerStarted","Data":"a6bd97e2248728da8d733646b2f290a46992c18e30e357ea57b2ba14e1fdfe4f"} Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.914847 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerID="25bfd3c4655199929f2d4bb08b8e479bc20d3298d96b639f969b51f99d4eac26" exitCode=0 Feb 23 17:33:51 crc kubenswrapper[4724]: I0223 17:33:51.916012 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerDied","Data":"25bfd3c4655199929f2d4bb08b8e479bc20d3298d96b639f969b51f99d4eac26"} Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.025098 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 17:33:52 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 23 17:33:52 crc kubenswrapper[4724]: [+]process-running ok Feb 23 17:33:52 crc kubenswrapper[4724]: healthz check failed Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.025176 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.238937 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.278652 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir\") pod \"f94958ec-8484-4e01-b05c-f00b60bf4554\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.278755 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f94958ec-8484-4e01-b05c-f00b60bf4554" (UID: "f94958ec-8484-4e01-b05c-f00b60bf4554"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.278984 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access\") pod \"f94958ec-8484-4e01-b05c-f00b60bf4554\" (UID: \"f94958ec-8484-4e01-b05c-f00b60bf4554\") " Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.280234 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94958ec-8484-4e01-b05c-f00b60bf4554-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.305096 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f94958ec-8484-4e01-b05c-f00b60bf4554" (UID: "f94958ec-8484-4e01-b05c-f00b60bf4554"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.381269 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94958ec-8484-4e01-b05c-f00b60bf4554-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.936265 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f94958ec-8484-4e01-b05c-f00b60bf4554","Type":"ContainerDied","Data":"1565d80f2a21d86910f05521a8f67bc59a7099edf36f0c7f1ff20c9feebbe74c"} Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.936327 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1565d80f2a21d86910f05521a8f67bc59a7099edf36f0c7f1ff20c9feebbe74c" Feb 23 17:33:52 crc kubenswrapper[4724]: I0223 17:33:52.936459 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 17:33:53 crc kubenswrapper[4724]: I0223 17:33:53.022100 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:53 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:53 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:53 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:53 crc kubenswrapper[4724]: I0223 17:33:53.022245 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:53 crc kubenswrapper[4724]: I0223 17:33:53.365555 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf"
Feb 23 17:33:53 crc kubenswrapper[4724]: I0223 17:33:53.372567 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xtsjf"
Feb 23 17:33:54 crc kubenswrapper[4724]: I0223 17:33:54.022428 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:54 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:54 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:54 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:54 crc kubenswrapper[4724]: I0223 17:33:54.022503 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:55 crc kubenswrapper[4724]: I0223 17:33:55.032927 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:55 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:55 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:55 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:55 crc kubenswrapper[4724]: I0223 17:33:55.033004 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:56 crc kubenswrapper[4724]: I0223 17:33:56.031932 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:56 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:56 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:56 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:56 crc kubenswrapper[4724]: I0223 17:33:56.032020 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.021897 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:57 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:57 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:57 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.022280 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.233279 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.233342 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.233939 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hzn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 23 17:33:57 crc kubenswrapper[4724]: I0223 17:33:57.233966 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8hzn4" podUID="fe2c617a-30bc-4095-b085-d6306827fcce" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 23 17:33:58 crc kubenswrapper[4724]: I0223 17:33:58.021970 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:58 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:58 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:58 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:58 crc kubenswrapper[4724]: I0223 17:33:58.022089 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:33:58 crc kubenswrapper[4724]: I0223 17:33:58.232681 4724 patch_prober.go:28] interesting pod/console-f9d7485db-fknnv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 23 17:33:58 crc kubenswrapper[4724]: I0223 17:33:58.232795 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fknnv" podUID="997b5710-9b99-4207-92da-28b7a1923db2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 23 17:33:59 crc kubenswrapper[4724]: I0223 17:33:59.023561 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:33:59 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:33:59 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:33:59 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:33:59 crc kubenswrapper[4724]: I0223 17:33:59.023660 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:34:00 crc kubenswrapper[4724]: I0223 17:34:00.021865 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:34:00 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:34:00 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:34:00 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:34:00 crc kubenswrapper[4724]: I0223 17:34:00.021934 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:34:01 crc kubenswrapper[4724]: I0223 17:34:01.021786 4724 patch_prober.go:28] interesting pod/router-default-5444994796-s77tw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 17:34:01 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 23 17:34:01 crc kubenswrapper[4724]: [+]process-running ok
Feb 23 17:34:01 crc kubenswrapper[4724]: healthz check failed
Feb 23 17:34:01 crc kubenswrapper[4724]: I0223 17:34:01.021886 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s77tw" podUID="facb437a-7568-41fc-a922-644ad2cfdda2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 17:34:02 crc kubenswrapper[4724]: I0223 17:34:02.096114 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-s77tw"
Feb 23 17:34:02 crc kubenswrapper[4724]: I0223 17:34:02.100964 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-s77tw"
Feb 23 17:34:07 crc kubenswrapper[4724]: I0223 17:34:07.240307 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8hzn4"
Feb 23 17:34:08 crc kubenswrapper[4724]: I0223 17:34:08.231138 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fknnv"
Feb 23 17:34:08 crc kubenswrapper[4724]: I0223 17:34:08.235070 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fknnv"
Feb 23 17:34:08 crc kubenswrapper[4724]: I0223 17:34:08.758864 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7"
Feb 23 17:34:18 crc kubenswrapper[4724]: I0223 17:34:18.447288 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dlfsv"
Feb 23 17:34:19 crc kubenswrapper[4724]: I0223 17:34:19.291282 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.866742 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.867011 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnv75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kbxv5_openshift-marketplace(b4ebec31-2766-49b2-9f05-9e6de41cf161): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.868239 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kbxv5" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.896527 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.896729 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9mhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vklmk_openshift-marketplace(a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 23 17:34:19 crc kubenswrapper[4724]: E0223 17:34:19.897962 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vklmk" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8"
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.228101 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerStarted","Data":"fdc39fb5091999c77e2e885f8e20628577fbc6860cb7be12ced53e5b4b1bca00"}
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.231447 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerStarted","Data":"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6"}
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.235505 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerStarted","Data":"9c41ad9b48cd385f9fd8cd2dc2419ab59e7b4649ec72ac6400ce42b6b61028eb"}
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.238673 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerStarted","Data":"9e9c60d71bbafac4664e006d8c64d1cef552bf79c0f4e11cf85af4c89eb5f540"}
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.241448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerStarted","Data":"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"}
Feb 23 17:34:20 crc kubenswrapper[4724]: I0223 17:34:20.244689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerStarted","Data":"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"}
Feb 23 17:34:20 crc kubenswrapper[4724]: E0223 17:34:20.245482 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vklmk" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8"
Feb 23 17:34:20 crc kubenswrapper[4724]: E0223 17:34:20.246518 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kbxv5" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161"
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.251089 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerID="fdc39fb5091999c77e2e885f8e20628577fbc6860cb7be12ced53e5b4b1bca00" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.251227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerDied","Data":"fdc39fb5091999c77e2e885f8e20628577fbc6860cb7be12ced53e5b4b1bca00"}
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.254640 4724 generic.go:334] "Generic (PLEG): container finished" podID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerID="696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.254701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerDied","Data":"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6"}
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.257250 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerID="9c41ad9b48cd385f9fd8cd2dc2419ab59e7b4649ec72ac6400ce42b6b61028eb" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.257901 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerDied","Data":"9c41ad9b48cd385f9fd8cd2dc2419ab59e7b4649ec72ac6400ce42b6b61028eb"}
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.263142 4724 generic.go:334] "Generic (PLEG): container finished" podID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerID="9e9c60d71bbafac4664e006d8c64d1cef552bf79c0f4e11cf85af4c89eb5f540" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.263195 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerDied","Data":"9e9c60d71bbafac4664e006d8c64d1cef552bf79c0f4e11cf85af4c89eb5f540"}
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.267155 4724 generic.go:334] "Generic (PLEG): container finished" podID="170d3970-9dce-48c5-9b25-9d30d5780282" containerID="6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.267202 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerDied","Data":"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"}
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.276699 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d8744a3-347d-4260-963c-5629092380fe" containerID="94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197" exitCode=0
Feb 23 17:34:21 crc kubenswrapper[4724]: I0223 17:34:21.276803 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerDied","Data":"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.286069 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerStarted","Data":"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.291211 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerStarted","Data":"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.293568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerStarted","Data":"9e2b9c41ec9b333a86753347d3117701def8bf80c1c64b479dfba38e62383fa2"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.295742 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerStarted","Data":"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.300125 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerStarted","Data":"220b87a1d2608fc8b1f2c08f8c729c5923da2d9a4e508df8449539a4a45ecd3f"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.303700 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerStarted","Data":"f1c924e5ee621253b9356ba53d5169b05d8aff40a88161649fcfdc16dbcfd773"}
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.304095 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j966w" podStartSLOduration=3.625486622 podStartE2EDuration="36.304078413s" podCreationTimestamp="2026-02-23 17:33:46 +0000 UTC" firstStartedPulling="2026-02-23 17:33:49.414653569 +0000 UTC m=+185.230853169" lastFinishedPulling="2026-02-23 17:34:22.09324536 +0000 UTC m=+217.909444960" observedRunningTime="2026-02-23 17:34:22.302531444 +0000 UTC m=+218.118731044" watchObservedRunningTime="2026-02-23 17:34:22.304078413 +0000 UTC m=+218.120278013"
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.324540 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfqzd" podStartSLOduration=2.669201245 podStartE2EDuration="36.324507822s" podCreationTimestamp="2026-02-23 17:33:46 +0000 UTC" firstStartedPulling="2026-02-23 17:33:48.215418348 +0000 UTC m=+184.031617948" lastFinishedPulling="2026-02-23 17:34:21.870724935 +0000 UTC m=+217.686924525" observedRunningTime="2026-02-23 17:34:22.320898328 +0000 UTC m=+218.137097928" watchObservedRunningTime="2026-02-23 17:34:22.324507822 +0000 UTC m=+218.140707422"
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.339844 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7l6ld" podStartSLOduration=2.985721243 podStartE2EDuration="34.339810708s" podCreationTimestamp="2026-02-23 17:33:48 +0000 UTC" firstStartedPulling="2026-02-23 17:33:50.621499169 +0000 UTC m=+186.437698769" lastFinishedPulling="2026-02-23 17:34:21.975588634 +0000 UTC m=+217.791788234" observedRunningTime="2026-02-23 17:34:22.337548086 +0000 UTC m=+218.153747696" watchObservedRunningTime="2026-02-23 17:34:22.339810708 +0000 UTC m=+218.156010308"
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.360684 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrhf2" podStartSLOduration=3.547578249 podStartE2EDuration="37.36065084s" podCreationTimestamp="2026-02-23 17:33:45 +0000 UTC" firstStartedPulling="2026-02-23 17:33:48.207508621 +0000 UTC m=+184.023708221" lastFinishedPulling="2026-02-23 17:34:22.020581212 +0000 UTC m=+217.836780812" observedRunningTime="2026-02-23 17:34:22.354241136 +0000 UTC m=+218.170440736" watchObservedRunningTime="2026-02-23 17:34:22.36065084 +0000 UTC m=+218.176850440"
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.373067 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ft7cc" podStartSLOduration=2.7794272429999998 podStartE2EDuration="36.373038473s" podCreationTimestamp="2026-02-23 17:33:46 +0000 UTC" firstStartedPulling="2026-02-23 17:33:48.250180439 +0000 UTC m=+184.066380039" lastFinishedPulling="2026-02-23 17:34:21.843791679 +0000 UTC m=+217.659991269" observedRunningTime="2026-02-23 17:34:22.370314386 +0000 UTC m=+218.186513986" watchObservedRunningTime="2026-02-23 17:34:22.373038473 +0000 UTC m=+218.189238073"
Feb 23 17:34:22 crc kubenswrapper[4724]: I0223 17:34:22.394462 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g5s4k" podStartSLOduration=2.807607772 podStartE2EDuration="35.394435852s" podCreationTimestamp="2026-02-23 17:33:47 +0000 UTC" firstStartedPulling="2026-02-23 17:33:49.350153578 +0000 UTC m=+185.166353188" lastFinishedPulling="2026-02-23 17:34:21.936981668 +0000 UTC m=+217.753181268" observedRunningTime="2026-02-23 17:34:22.392120939 +0000 UTC m=+218.208320539" watchObservedRunningTime="2026-02-23 17:34:22.394435852 +0000 UTC m=+218.210635452"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.380645 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 23 17:34:23 crc kubenswrapper[4724]: E0223 17:34:23.380942 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25854d47-4d3e-4817-b27c-a186432e8c32" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.380961 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="25854d47-4d3e-4817-b27c-a186432e8c32" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: E0223 17:34:23.380979 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dbde4d-5c0f-45cf-82ae-435b16e17121" containerName="collect-profiles"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.380986 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dbde4d-5c0f-45cf-82ae-435b16e17121" containerName="collect-profiles"
Feb 23 17:34:23 crc kubenswrapper[4724]: E0223 17:34:23.381011 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94958ec-8484-4e01-b05c-f00b60bf4554" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.381019 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94958ec-8484-4e01-b05c-f00b60bf4554" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.381133 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="25854d47-4d3e-4817-b27c-a186432e8c32" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.381155 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dbde4d-5c0f-45cf-82ae-435b16e17121" containerName="collect-profiles"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.381166 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94958ec-8484-4e01-b05c-f00b60bf4554" containerName="pruner"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.381704 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.385127 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.388784 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.396112 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.578340 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.578459 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.679896 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.680031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.680136 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:23 crc kubenswrapper[4724]: I0223 17:34:23.706432 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:24 crc kubenswrapper[4724]: I0223 17:34:24.002440 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:24 crc kubenswrapper[4724]: I0223 17:34:24.494981 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"]
Feb 23 17:34:24 crc kubenswrapper[4724]: I0223 17:34:24.568215 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 23 17:34:25 crc kubenswrapper[4724]: I0223 17:34:25.324350 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"558ea4ed-624f-432f-b43c-552eefdd0938","Type":"ContainerStarted","Data":"824c8c60341ecd187c9f8a6722c6574931b10869039c79da0ed09aaa9b271484"}
Feb 23 17:34:25 crc kubenswrapper[4724]: I0223 17:34:25.324834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"558ea4ed-624f-432f-b43c-552eefdd0938","Type":"ContainerStarted","Data":"3f8f524be9abad4a07d24c1f616a746d7d5a762095e95d17ebec7778cf641a27"}
Feb 23 17:34:25 crc kubenswrapper[4724]: I0223 17:34:25.343494 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.343461003 podStartE2EDuration="2.343461003s" podCreationTimestamp="2026-02-23 17:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:34:25.340998395 +0000 UTC m=+221.157197995" watchObservedRunningTime="2026-02-23 17:34:25.343461003 +0000 UTC m=+221.159660603"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.334950 4724 generic.go:334] "Generic (PLEG): container finished" podID="558ea4ed-624f-432f-b43c-552eefdd0938" containerID="824c8c60341ecd187c9f8a6722c6574931b10869039c79da0ed09aaa9b271484" exitCode=0
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.335009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"558ea4ed-624f-432f-b43c-552eefdd0938","Type":"ContainerDied","Data":"824c8c60341ecd187c9f8a6722c6574931b10869039c79da0ed09aaa9b271484"}
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.366509 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.366636 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.568256 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.607940 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.608310 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.649859 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.858037 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.858122 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:26 crc kubenswrapper[4724]: I0223 17:34:26.920955 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.001914 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.001991 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.053122 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.389314 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.389685 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.389928 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.397215 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.727182 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.742895 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access\") pod \"558ea4ed-624f-432f-b43c-552eefdd0938\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") "
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.742990 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir\") pod \"558ea4ed-624f-432f-b43c-552eefdd0938\" (UID: \"558ea4ed-624f-432f-b43c-552eefdd0938\") "
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.743476 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "558ea4ed-624f-432f-b43c-552eefdd0938" (UID: "558ea4ed-624f-432f-b43c-552eefdd0938"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.752369 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.752459 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.794220 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "558ea4ed-624f-432f-b43c-552eefdd0938" (UID: "558ea4ed-624f-432f-b43c-552eefdd0938"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.844133 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/558ea4ed-624f-432f-b43c-552eefdd0938-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:27 crc kubenswrapper[4724]: I0223 17:34:27.844176 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/558ea4ed-624f-432f-b43c-552eefdd0938-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.349058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"558ea4ed-624f-432f-b43c-552eefdd0938","Type":"ContainerDied","Data":"3f8f524be9abad4a07d24c1f616a746d7d5a762095e95d17ebec7778cf641a27"}
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.349125 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8f524be9abad4a07d24c1f616a746d7d5a762095e95d17ebec7778cf641a27"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.352118 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.381674 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5s4k"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.382119 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5s4k"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.444635 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5s4k"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.587132 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j966w"]
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.929580 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7l6ld"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.929666 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7l6ld"
Feb 23 17:34:28 crc kubenswrapper[4724]: I0223 17:34:28.973507 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7l6ld"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.356090 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j966w" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="registry-server" containerID="cri-o://8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c" gracePeriod=2
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.370460 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 17:34:29 crc kubenswrapper[4724]: E0223 17:34:29.370800 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558ea4ed-624f-432f-b43c-552eefdd0938" containerName="pruner"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.370825 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="558ea4ed-624f-432f-b43c-552eefdd0938" containerName="pruner"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.370946 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="558ea4ed-624f-432f-b43c-552eefdd0938" containerName="pruner"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.371443 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.375885 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.375975 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.394232 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.416134 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7l6ld"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.417161 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5s4k"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.478538 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.478604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.478638 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.579565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.579633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.579662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.579719 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.579778 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.582824 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.583103 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfqzd" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="registry-server" containerID="cri-o://96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab" gracePeriod=2
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.617697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access\") pod \"installer-9-crc\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.713223 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.843022 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.883328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content\") pod \"170d3970-9dce-48c5-9b25-9d30d5780282\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") "
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.883510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities\") pod \"170d3970-9dce-48c5-9b25-9d30d5780282\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") "
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.883632 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfq8g\" (UniqueName: \"kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g\") pod \"170d3970-9dce-48c5-9b25-9d30d5780282\" (UID: \"170d3970-9dce-48c5-9b25-9d30d5780282\") "
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.885415 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities" (OuterVolumeSpecName: "utilities") pod "170d3970-9dce-48c5-9b25-9d30d5780282" (UID: "170d3970-9dce-48c5-9b25-9d30d5780282"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.890515 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g" (OuterVolumeSpecName: "kube-api-access-nfq8g") pod "170d3970-9dce-48c5-9b25-9d30d5780282" (UID: "170d3970-9dce-48c5-9b25-9d30d5780282"). InnerVolumeSpecName "kube-api-access-nfq8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.945418 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "170d3970-9dce-48c5-9b25-9d30d5780282" (UID: "170d3970-9dce-48c5-9b25-9d30d5780282"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.985227 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.985263 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfq8g\" (UniqueName: \"kubernetes.io/projected/170d3970-9dce-48c5-9b25-9d30d5780282-kube-api-access-nfq8g\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:29 crc kubenswrapper[4724]: I0223 17:34:29.985275 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170d3970-9dce-48c5-9b25-9d30d5780282-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.113293 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.190679 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klh5w\" (UniqueName: \"kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w\") pod \"5d8744a3-347d-4260-963c-5629092380fe\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") "
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.190785 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content\") pod \"5d8744a3-347d-4260-963c-5629092380fe\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") "
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.190913 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities\") pod \"5d8744a3-347d-4260-963c-5629092380fe\" (UID: \"5d8744a3-347d-4260-963c-5629092380fe\") "
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.191709 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities" (OuterVolumeSpecName: "utilities") pod "5d8744a3-347d-4260-963c-5629092380fe" (UID: "5d8744a3-347d-4260-963c-5629092380fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.198174 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w" (OuterVolumeSpecName: "kube-api-access-klh5w") pod "5d8744a3-347d-4260-963c-5629092380fe" (UID: "5d8744a3-347d-4260-963c-5629092380fe"). InnerVolumeSpecName "kube-api-access-klh5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.261213 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d8744a3-347d-4260-963c-5629092380fe" (UID: "5d8744a3-347d-4260-963c-5629092380fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.280717 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.292244 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.292513 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klh5w\" (UniqueName: \"kubernetes.io/projected/5d8744a3-347d-4260-963c-5629092380fe-kube-api-access-klh5w\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.292610 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8744a3-347d-4260-963c-5629092380fe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.366404 4724 generic.go:334] "Generic (PLEG): container finished" podID="170d3970-9dce-48c5-9b25-9d30d5780282" containerID="8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c" exitCode=0
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.366494 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerDied","Data":"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"}
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.366529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j966w" event={"ID":"170d3970-9dce-48c5-9b25-9d30d5780282","Type":"ContainerDied","Data":"31b18875f763481e8458b72f028ac2d78d4f087dd5c254ae069a2495c4568a63"}
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.366550 4724 scope.go:117] "RemoveContainer" containerID="8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.366762 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j966w"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.372839 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d8744a3-347d-4260-963c-5629092380fe" containerID="96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab" exitCode=0
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.373167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerDied","Data":"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"}
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.373248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfqzd" event={"ID":"5d8744a3-347d-4260-963c-5629092380fe","Type":"ContainerDied","Data":"cde10bc33a70f3a714116ba206774c8228222e391a33449aa6523f9dbe91fcf1"}
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.373352 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfqzd"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.379010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"129a58dd-706e-428b-8ab7-35194d9e0503","Type":"ContainerStarted","Data":"bb71c3e9daa80d1d901d38d3672d352ae0f4539149cfc735afa494483e1f55c2"}
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.396760 4724 scope.go:117] "RemoveContainer" containerID="6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.411252 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.412866 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfqzd"]
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.428880 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j966w"]
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.431925 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j966w"]
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.437509 4724 scope.go:117] "RemoveContainer" containerID="3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.463422 4724 scope.go:117] "RemoveContainer" containerID="8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.464098 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c\": container with ID starting with 8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c not found: ID does not exist" containerID="8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.464166 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c"} err="failed to get container status \"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c\": rpc error: code = NotFound desc = could not find container \"8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c\": container with ID starting with 8794213a7154d5a8e3cae7aa0863cd460f80536690fb467e53e3a78672d7523c not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.464201 4724 scope.go:117] "RemoveContainer" containerID="6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.464557 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb\": container with ID starting with 6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb not found: ID does not exist" containerID="6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.464645 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb"} err="failed to get container status \"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb\": rpc error: code = NotFound desc = could not find container \"6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb\": container with ID starting with 6f78aef7fa5100a7b686687c718689170754ccd56a170e6d6a5b4cc7134c5bfb not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.464715 4724 scope.go:117] "RemoveContainer" containerID="3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.465292 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3\": container with ID starting with 3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3 not found: ID does not exist" containerID="3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.465320 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3"} err="failed to get container status \"3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3\": rpc error: code = NotFound desc = could not find container \"3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3\": container with ID starting with 3c64859c1b0e8eebd07578b754c4353737bfc8b8e796f282339781ddff1b31e3 not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.465338 4724 scope.go:117] "RemoveContainer" containerID="96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.481769 4724 scope.go:117] "RemoveContainer" containerID="94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.552458 4724 scope.go:117] "RemoveContainer" containerID="7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.579707 4724 scope.go:117] "RemoveContainer" containerID="96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.580325 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab\": container with ID starting with 96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab not found: ID does not exist" containerID="96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.580362 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab"} err="failed to get container status \"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab\": rpc error: code = NotFound desc = could not find container \"96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab\": container with ID starting with 96f8b8cf4b4432fe908e199d6292d05a6e8c5c3f01ee0b29a8ecab4046afe2ab not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.580434 4724 scope.go:117] "RemoveContainer" containerID="94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.581233 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197\": container with ID starting with 94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197 not found: ID does not exist" containerID="94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.581255 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197"} err="failed to get container status \"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197\": rpc error: code = NotFound desc = could not find container \"94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197\": container with ID starting with 94e9c019d5adfffaa4d57531d0cf5af9b403aef714324f5fbbe563e8915f5197 not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.581270 4724 scope.go:117] "RemoveContainer" containerID="7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781"
Feb 23 17:34:30 crc kubenswrapper[4724]: E0223 17:34:30.581742 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781\": container with ID starting with 7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781 not found: ID does not exist" containerID="7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.581767 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781"} err="failed to get container status \"7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781\": rpc error: code = NotFound desc = could not find container \"7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781\": container with ID starting with 7b48ee1428d997761f99509db6b07456e658316500ff913242f4f0a2eb494781 not found: ID does not exist"
Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.962639 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="170d3970-9dce-48c5-9b25-9d30d5780282" path="/var/lib/kubelet/pods/170d3970-9dce-48c5-9b25-9d30d5780282/volumes" Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.963414 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8744a3-347d-4260-963c-5629092380fe" path="/var/lib/kubelet/pods/5d8744a3-347d-4260-963c-5629092380fe/volumes" Feb 23 17:34:30 crc kubenswrapper[4724]: I0223 17:34:30.990183 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:34:31 crc kubenswrapper[4724]: I0223 17:34:31.389475 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7l6ld" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="registry-server" containerID="cri-o://220b87a1d2608fc8b1f2c08f8c729c5923da2d9a4e508df8449539a4a45ecd3f" gracePeriod=2 Feb 23 17:34:31 crc kubenswrapper[4724]: I0223 17:34:31.390469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"129a58dd-706e-428b-8ab7-35194d9e0503","Type":"ContainerStarted","Data":"3096413083da557aa4aef0d5fbb52eda752df706faa1d4257d306bd7d8f3bd3f"} Feb 23 17:34:31 crc kubenswrapper[4724]: I0223 17:34:31.409368 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.409333954 podStartE2EDuration="2.409333954s" podCreationTimestamp="2026-02-23 17:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:34:31.407730573 +0000 UTC m=+227.223930183" watchObservedRunningTime="2026-02-23 17:34:31.409333954 +0000 UTC m=+227.225533554" Feb 23 17:34:32 crc kubenswrapper[4724]: I0223 17:34:32.400219 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerID="220b87a1d2608fc8b1f2c08f8c729c5923da2d9a4e508df8449539a4a45ecd3f" exitCode=0 Feb 23 17:34:32 crc kubenswrapper[4724]: I0223 17:34:32.400313 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerDied","Data":"220b87a1d2608fc8b1f2c08f8c729c5923da2d9a4e508df8449539a4a45ecd3f"} Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.666348 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.756342 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities\") pod \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.757519 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgdvh\" (UniqueName: \"kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh\") pod \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.757564 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content\") pod \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\" (UID: \"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31\") " Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.757800 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities" (OuterVolumeSpecName: "utilities") pod "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" (UID: "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.758253 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.763422 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh" (OuterVolumeSpecName: "kube-api-access-tgdvh") pod "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" (UID: "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31"). InnerVolumeSpecName "kube-api-access-tgdvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.786953 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" (UID: "b2f027f8-9cac-49ef-87b4-6a6d0d2aee31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.859140 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgdvh\" (UniqueName: \"kubernetes.io/projected/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-kube-api-access-tgdvh\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:34 crc kubenswrapper[4724]: I0223 17:34:34.859175 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.415519 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerStarted","Data":"de81114211cb9fb4fa9e09c0f3163b7b9370aec75a2bc59b5bb9136afa237ad6"} Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.418155 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7l6ld" event={"ID":"b2f027f8-9cac-49ef-87b4-6a6d0d2aee31","Type":"ContainerDied","Data":"d574c3f749e1d9f5286ba8995c6ad3da64b08fcf820575575a93e46f8c3da70a"} Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.418196 4724 scope.go:117] "RemoveContainer" containerID="220b87a1d2608fc8b1f2c08f8c729c5923da2d9a4e508df8449539a4a45ecd3f" Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.418242 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7l6ld" Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.432659 4724 scope.go:117] "RemoveContainer" containerID="9c41ad9b48cd385f9fd8cd2dc2419ab59e7b4649ec72ac6400ce42b6b61028eb" Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.451364 4724 scope.go:117] "RemoveContainer" containerID="109a6c2d4c359bbe19d0e24a1a1ad8869048c1e8fc3062b5295de54b77cf8eb0" Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.452725 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:34:35 crc kubenswrapper[4724]: I0223 17:34:35.454891 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7l6ld"] Feb 23 17:34:36 crc kubenswrapper[4724]: I0223 17:34:36.424360 4724 generic.go:334] "Generic (PLEG): container finished" podID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerID="fcbdc9df19cb38e6b00174ea4d526c6f4a02fff40e43ef1bcbbfa4eb8d2460ac" exitCode=0 Feb 23 17:34:36 crc kubenswrapper[4724]: I0223 17:34:36.424858 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerDied","Data":"fcbdc9df19cb38e6b00174ea4d526c6f4a02fff40e43ef1bcbbfa4eb8d2460ac"} Feb 23 17:34:36 crc kubenswrapper[4724]: I0223 17:34:36.431873 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerID="de81114211cb9fb4fa9e09c0f3163b7b9370aec75a2bc59b5bb9136afa237ad6" exitCode=0 Feb 23 17:34:36 crc kubenswrapper[4724]: I0223 17:34:36.431921 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerDied","Data":"de81114211cb9fb4fa9e09c0f3163b7b9370aec75a2bc59b5bb9136afa237ad6"} Feb 23 17:34:36 crc kubenswrapper[4724]: I0223 17:34:36.969975 4724 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" path="/var/lib/kubelet/pods/b2f027f8-9cac-49ef-87b4-6a6d0d2aee31/volumes" Feb 23 17:34:37 crc kubenswrapper[4724]: I0223 17:34:37.440527 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerStarted","Data":"94c977155f11c676f0c6d41213d0ac1ec8312266238d7b6f15a65f712cf16585"} Feb 23 17:34:37 crc kubenswrapper[4724]: I0223 17:34:37.442933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerStarted","Data":"6f42f60b179a86259e9df66d4f34390a55bc95885d946bbc54dbb7d73483171e"} Feb 23 17:34:37 crc kubenswrapper[4724]: I0223 17:34:37.501097 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kbxv5" podStartSLOduration=3.641994711 podStartE2EDuration="48.501077566s" podCreationTimestamp="2026-02-23 17:33:49 +0000 UTC" firstStartedPulling="2026-02-23 17:33:51.942368848 +0000 UTC m=+187.758568448" lastFinishedPulling="2026-02-23 17:34:36.801451663 +0000 UTC m=+232.617651303" observedRunningTime="2026-02-23 17:34:37.473538361 +0000 UTC m=+233.289737971" watchObservedRunningTime="2026-02-23 17:34:37.501077566 +0000 UTC m=+233.317277166" Feb 23 17:34:37 crc kubenswrapper[4724]: I0223 17:34:37.501638 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vklmk" podStartSLOduration=3.560321631 podStartE2EDuration="48.501630573s" podCreationTimestamp="2026-02-23 17:33:49 +0000 UTC" firstStartedPulling="2026-02-23 17:33:51.894350327 +0000 UTC m=+187.710549927" lastFinishedPulling="2026-02-23 17:34:36.835659269 +0000 UTC m=+232.651858869" observedRunningTime="2026-02-23 17:34:37.495714805 +0000 UTC m=+233.311914405" watchObservedRunningTime="2026-02-23 17:34:37.501630573 +0000 UTC m=+233.317830173" Feb 23 17:34:39 crc kubenswrapper[4724]: I0223 17:34:39.776367 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:34:39 crc kubenswrapper[4724]: I0223 17:34:39.776447 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:34:40 crc kubenswrapper[4724]: I0223 17:34:40.120016 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:40 crc kubenswrapper[4724]: I0223 17:34:40.120069 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:40 crc kubenswrapper[4724]: I0223 17:34:40.829511 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kbxv5" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="registry-server" probeResult="failure" output=< Feb 23 17:34:40 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 17:34:40 crc kubenswrapper[4724]: > Feb 23 17:34:41 crc kubenswrapper[4724]: I0223 17:34:41.172962 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vklmk" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="registry-server" probeResult="failure" output=< Feb 23 17:34:41 crc 
kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 17:34:41 crc kubenswrapper[4724]: > Feb 23 17:34:49 crc kubenswrapper[4724]: I0223 17:34:49.547847 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" containerID="cri-o://a6e946e37ee6d22768fafc08fbb8ed082d5b9dac186b570f6caa39f8f4bb28ca" gracePeriod=15 Feb 23 17:34:49 crc kubenswrapper[4724]: I0223 17:34:49.840238 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:34:49 crc kubenswrapper[4724]: I0223 17:34:49.904904 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:34:50 crc kubenswrapper[4724]: I0223 17:34:50.227586 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:50 crc kubenswrapper[4724]: I0223 17:34:50.266726 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:50 crc kubenswrapper[4724]: I0223 17:34:50.585646 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:34:51 crc kubenswrapper[4724]: I0223 17:34:51.528078 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vklmk" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="registry-server" containerID="cri-o://6f42f60b179a86259e9df66d4f34390a55bc95885d946bbc54dbb7d73483171e" gracePeriod=2 Feb 23 17:34:54 crc kubenswrapper[4724]: I0223 17:34:54.663451 4724 generic.go:334] "Generic (PLEG): container finished" podID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerID="a6e946e37ee6d22768fafc08fbb8ed082d5b9dac186b570f6caa39f8f4bb28ca" exitCode=0 Feb 23 17:34:54 crc kubenswrapper[4724]: I0223 17:34:54.663529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" event={"ID":"757355b8-9b0f-4c38-9560-a0281e0fa332","Type":"ContainerDied","Data":"a6e946e37ee6d22768fafc08fbb8ed082d5b9dac186b570f6caa39f8f4bb28ca"} Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.192619 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226283 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-wbx58"] Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226600 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226618 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226633 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226641 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226652 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226665 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226682 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226690 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226701 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226710 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226720 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226728 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226743 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226751 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="extract-utilities" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226761 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226769 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226778 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226786 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: E0223 17:34:55.226803 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226811 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="extract-content" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226937 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f027f8-9cac-49ef-87b4-6a6d0d2aee31" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226953 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8744a3-347d-4260-963c-5629092380fe" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226963 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="170d3970-9dce-48c5-9b25-9d30d5780282" containerName="registry-server" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.226978 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" containerName="oauth-openshift" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.227542 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.235714 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-wbx58"] Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305130 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305196 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305255 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305309 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305354 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305410 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305466 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305514 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.305541 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306079 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306102 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306406 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306439 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-625kc\" (UniqueName: \"kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306475 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306527 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306573 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection\") pod \"757355b8-9b0f-4c38-9560-a0281e0fa332\" (UID: \"757355b8-9b0f-4c38-9560-a0281e0fa332\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306767 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306893 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306943 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.306978 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307019 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-policies\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307051 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-error\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307092 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307281 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307336 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srknl\" (UniqueName: \"kubernetes.io/projected/a5b3d3e4-f060-47a0-a758-e78e423949dd-kube-api-access-srknl\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.307948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308033 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-dir\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308220 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308350 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308621 4724 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308653 4724 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308674 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308694 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.308711 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.311844 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.312305 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.312558 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.312762 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.313215 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc" (OuterVolumeSpecName: "kube-api-access-625kc") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "kube-api-access-625kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.317726 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.319724 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.320038 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.320284 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "757355b8-9b0f-4c38-9560-a0281e0fa332" (UID: "757355b8-9b0f-4c38-9560-a0281e0fa332"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.409977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srknl\" (UniqueName: \"kubernetes.io/projected/a5b3d3e4-f060-47a0-a758-e78e423949dd-kube-api-access-srknl\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410053 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-dir\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410083 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410115 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410168 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410189 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410210 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 
17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410225 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-policies\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410258 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-error\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410276 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410330 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410374 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410406 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410419 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 
17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410430 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-625kc\" (UniqueName: \"kubernetes.io/projected/757355b8-9b0f-4c38-9560-a0281e0fa332-kube-api-access-625kc\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410440 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410454 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410464 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410474 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.410486 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/757355b8-9b0f-4c38-9560-a0281e0fa332-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.411272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.411717 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-dir\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.412954 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.413021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-audit-policies\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.413580 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.416527 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.416623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.417134 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.418092 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.418569 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-template-error\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.418869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.418962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.419546 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a5b3d3e4-f060-47a0-a758-e78e423949dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.426685 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srknl\" (UniqueName: \"kubernetes.io/projected/a5b3d3e4-f060-47a0-a758-e78e423949dd-kube-api-access-srknl\") pod \"oauth-openshift-74fd85d944-wbx58\" (UID: \"a5b3d3e4-f060-47a0-a758-e78e423949dd\") " pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.569783 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.683443 4724 generic.go:334] "Generic (PLEG): container finished" podID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerID="6f42f60b179a86259e9df66d4f34390a55bc95885d946bbc54dbb7d73483171e" exitCode=0 Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.683519 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerDied","Data":"6f42f60b179a86259e9df66d4f34390a55bc95885d946bbc54dbb7d73483171e"} Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.685283 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" event={"ID":"757355b8-9b0f-4c38-9560-a0281e0fa332","Type":"ContainerDied","Data":"ee2cacd57ae8ab25878bba19ffdcf1a59d7e52428addc164d77c55653bf47e80"} Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.685313 4724 scope.go:117] "RemoveContainer" containerID="a6e946e37ee6d22768fafc08fbb8ed082d5b9dac186b570f6caa39f8f4bb28ca" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.685478 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4kcvg" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.720187 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"] Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.722933 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4kcvg"] Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.762720 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.815591 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content\") pod \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.815780 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mhb\" (UniqueName: \"kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb\") pod \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.815820 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities\") pod \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\" (UID: \"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8\") " Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.817089 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities" (OuterVolumeSpecName: "utilities") pod "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" (UID: "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.822261 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb" (OuterVolumeSpecName: "kube-api-access-x9mhb") pod "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" (UID: "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8"). InnerVolumeSpecName "kube-api-access-x9mhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.917713 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mhb\" (UniqueName: \"kubernetes.io/projected/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-kube-api-access-x9mhb\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.917754 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:55 crc kubenswrapper[4724]: I0223 17:34:55.938318 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" (UID: "a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.008918 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-wbx58"] Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.018820 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.693814 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" event={"ID":"a5b3d3e4-f060-47a0-a758-e78e423949dd","Type":"ContainerStarted","Data":"74e7b9b2816f17e5149c356c7c33eadb8d1b962d481d2436cbaa7934627dcd42"} Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.694189 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.694215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" event={"ID":"a5b3d3e4-f060-47a0-a758-e78e423949dd","Type":"ContainerStarted","Data":"3b6c7e4720b3890382d5cd426bf040ad741a1577dd9688dd5138b71fbcd3580a"} Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.696942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vklmk" event={"ID":"a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8","Type":"ContainerDied","Data":"a6bd97e2248728da8d733646b2f290a46992c18e30e357ea57b2ba14e1fdfe4f"} Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.696986 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vklmk" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.697015 4724 scope.go:117] "RemoveContainer" containerID="6f42f60b179a86259e9df66d4f34390a55bc95885d946bbc54dbb7d73483171e" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.729937 4724 scope.go:117] "RemoveContainer" containerID="fcbdc9df19cb38e6b00174ea4d526c6f4a02fff40e43ef1bcbbfa4eb8d2460ac" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.727412 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" podStartSLOduration=32.727301833 podStartE2EDuration="32.727301833s" podCreationTimestamp="2026-02-23 17:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:34:56.719156515 +0000 UTC m=+252.535356155" watchObservedRunningTime="2026-02-23 17:34:56.727301833 +0000 UTC m=+252.543501483" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.739796 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.742407 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vklmk"] Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.752755 4724 scope.go:117] "RemoveContainer" containerID="ad051b6bb79454faf78f28b8692048101951deb96352d42662bf1f3b679c56d8" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.967132 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757355b8-9b0f-4c38-9560-a0281e0fa332" path="/var/lib/kubelet/pods/757355b8-9b0f-4c38-9560-a0281e0fa332/volumes" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.968181 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" path="/var/lib/kubelet/pods/a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8/volumes" Feb 23 17:34:56 crc kubenswrapper[4724]: I0223 17:34:56.982666 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fd85d944-wbx58" Feb 23 17:34:57 crc kubenswrapper[4724]: I0223 17:34:57.751997 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:34:57 crc kubenswrapper[4724]: I0223 17:34:57.752365 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.383599 4724 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.384572 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="extract-content" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.384596 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" 
containerName="extract-content" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.384627 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="registry-server" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.384643 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="registry-server" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.384678 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="extract-utilities" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.384691 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="extract-utilities" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.384913 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ebf13d-dedd-42b9-8308-4d8cbf8ea9c8" containerName="registry-server" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.385526 4724 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.385954 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530" gracePeriod=15 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.386045 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619" gracePeriod=15 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.386113 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de" gracePeriod=15 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.386154 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.386189 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4" gracePeriod=15 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.386091 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d" gracePeriod=15 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387119 4724 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387383 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387422 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387436 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387442 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387467 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387473 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387481 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387503 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387512 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387517 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387525 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387532 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 17:35:08 crc kubenswrapper[4724]: 
E0223 17:35:08.387539 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387544 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387554 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387559 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 17:35:08 crc kubenswrapper[4724]: E0223 17:35:08.387567 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387573 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387693 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387711 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387726 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387735 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387743 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387751 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387763 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.387983 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.405112 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.422635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.422698 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.422803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.422828 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.422851 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.423029 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.423110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524273 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524324 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524350 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524388 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524418 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524442 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524473 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524562 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524568 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524595 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524621 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524624 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524652 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524661 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524664 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.524692 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.777014 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.778699 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.779857 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4" exitCode=0 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.779886 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619" exitCode=0 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.779895 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d" exitCode=0 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.779902 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de" exitCode=2 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.779950 4724 scope.go:117] "RemoveContainer" containerID="93ecc81be09fd6bda89d4151c56443b5b7b4f8c76b8fa7976cf21fc6427abfae" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.782208 4724 generic.go:334] "Generic (PLEG): container finished" podID="129a58dd-706e-428b-8ab7-35194d9e0503" 
containerID="3096413083da557aa4aef0d5fbb52eda752df706faa1d4257d306bd7d8f3bd3f" exitCode=0 Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.782246 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"129a58dd-706e-428b-8ab7-35194d9e0503","Type":"ContainerDied","Data":"3096413083da557aa4aef0d5fbb52eda752df706faa1d4257d306bd7d8f3bd3f"} Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.783456 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:08 crc kubenswrapper[4724]: I0223 17:35:08.783904 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:09 crc kubenswrapper[4724]: I0223 17:35:09.807536 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 17:35:10 crc kubenswrapper[4724]: E0223 17:35:10.020590 4724 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" volumeName="registry-storage" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.127205 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.128090 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock\") pod \"129a58dd-706e-428b-8ab7-35194d9e0503\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249300 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock" (OuterVolumeSpecName: "var-lock") pod "129a58dd-706e-428b-8ab7-35194d9e0503" (UID: "129a58dd-706e-428b-8ab7-35194d9e0503"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249364 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access\") pod \"129a58dd-706e-428b-8ab7-35194d9e0503\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249423 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir\") pod \"129a58dd-706e-428b-8ab7-35194d9e0503\" (UID: \"129a58dd-706e-428b-8ab7-35194d9e0503\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249519 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "129a58dd-706e-428b-8ab7-35194d9e0503" (UID: "129a58dd-706e-428b-8ab7-35194d9e0503"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249763 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.249783 4724 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/129a58dd-706e-428b-8ab7-35194d9e0503-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.258785 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "129a58dd-706e-428b-8ab7-35194d9e0503" (UID: "129a58dd-706e-428b-8ab7-35194d9e0503"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.351556 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/129a58dd-706e-428b-8ab7-35194d9e0503-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.821215 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.822530 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530" exitCode=0 Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.824788 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"129a58dd-706e-428b-8ab7-35194d9e0503","Type":"ContainerDied","Data":"bb71c3e9daa80d1d901d38d3672d352ae0f4539149cfc735afa494483e1f55c2"} Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.824829 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb71c3e9daa80d1d901d38d3672d352ae0f4539149cfc735afa494483e1f55c2" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.824882 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.845899 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.848180 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.848996 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.849767 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.850061 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.960631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.960735 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.960780 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.960807 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.960867 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.961014 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.961692 4724 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.961728 4724 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:10 crc kubenswrapper[4724]: I0223 17:35:10.961741 4724 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.833250 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.835460 4724 scope.go:117] "RemoveContainer" containerID="a5b889ee0ad15cc6ff92c69a44c4bc0bf620e23e3dc7276324662c265ba5c7d4" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.835678 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.837254 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.837871 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.851906 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.852173 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.852312 4724 scope.go:117] "RemoveContainer" containerID="b550694e49325768a2674a4d2b7089dd267b91a273751fb62d060a0663f06619" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.869482 4724 scope.go:117] "RemoveContainer" containerID="beda3a9d030f2d4aa9c6a78508a84f5bc122debebb2c5b423233834433b8fd5d" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.883340 4724 scope.go:117] "RemoveContainer" containerID="65e108d74a635da102e857e2ad4f18384861e4e630ae31af9aa05841e6bbc2de" Feb 23 17:35:11 crc kubenswrapper[4724]: 
I0223 17:35:11.897430 4724 scope.go:117] "RemoveContainer" containerID="fe805066e5e61f09a4f5cf1f9e912da6664229c8db86a898cbe670278e4c2530" Feb 23 17:35:11 crc kubenswrapper[4724]: I0223 17:35:11.916792 4724 scope.go:117] "RemoveContainer" containerID="0b6052faa9c320cff02ab6aa2110ae43293c5142f8565b1200749d89805cab74" Feb 23 17:35:12 crc kubenswrapper[4724]: I0223 17:35:12.959813 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 17:35:13 crc kubenswrapper[4724]: E0223 17:35:13.442994 4724 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:13 crc kubenswrapper[4724]: I0223 17:35:13.443906 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:13 crc kubenswrapper[4724]: E0223 17:35:13.475451 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896f0a954827114 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 17:35:13.4748429 +0000 UTC m=+269.291042490,LastTimestamp:2026-02-23 17:35:13.4748429 +0000 UTC m=+269.291042490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 17:35:13 crc kubenswrapper[4724]: I0223 17:35:13.851316 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63"} Feb 23 17:35:13 crc kubenswrapper[4724]: I0223 17:35:13.851369 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7a760530202bdb5417f27baaa2af8f5cf6e9bf5dd4da90bcb7a956b3a320db83"} Feb 23 17:35:13 crc kubenswrapper[4724]: I0223 17:35:13.851936 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:13 crc kubenswrapper[4724]: E0223 17:35:13.851967 4724 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:14 crc kubenswrapper[4724]: I0223 17:35:14.956287 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.202412 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.203428 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.203997 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.204414 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.204768 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:15 crc kubenswrapper[4724]: I0223 17:35:15.204808 4724 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.205123 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.406811 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms" Feb 23 17:35:15 crc kubenswrapper[4724]: E0223 17:35:15.808407 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Feb 23 17:35:16 crc kubenswrapper[4724]: E0223 17:35:16.609328 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: 
connection refused" interval="1.6s" Feb 23 17:35:18 crc kubenswrapper[4724]: E0223 17:35:18.209943 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Feb 23 17:35:18 crc kubenswrapper[4724]: E0223 17:35:18.856023 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896f0a954827114 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 17:35:13.4748429 +0000 UTC m=+269.291042490,LastTimestamp:2026-02-23 17:35:13.4748429 +0000 UTC m=+269.291042490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 17:35:20 crc kubenswrapper[4724]: I0223 17:35:20.950933 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:20 crc kubenswrapper[4724]: I0223 17:35:20.951917 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:20 crc kubenswrapper[4724]: I0223 17:35:20.979648 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:20 crc kubenswrapper[4724]: I0223 17:35:20.979720 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:20 crc kubenswrapper[4724]: E0223 17:35:20.980760 4724 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:20 crc kubenswrapper[4724]: I0223 17:35:20.981577 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:21 crc kubenswrapper[4724]: E0223 17:35:21.411999 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="6.4s" Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.902750 4724 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a64cf1c2f614c0153a6778196ba5b4b4f44079c64aec71d18c0cc7af711ee740" exitCode=0 Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.902822 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a64cf1c2f614c0153a6778196ba5b4b4f44079c64aec71d18c0cc7af711ee740"} Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.902866 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c75594de0703320ec20e5cfb86e4ba5abd7c73610128daced63de820e21d9bc"} Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.903361 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.903431 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:21 crc kubenswrapper[4724]: E0223 17:35:21.904114 4724 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:21 crc kubenswrapper[4724]: I0223 17:35:21.904208 4724 status_manager.go:851] "Failed to get status for pod" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.913348 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.915477 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.915536 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910" exitCode=1 Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.915610 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910"} Feb 23 17:35:22 crc 
Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.916360 4724 scope.go:117] "RemoveContainer" containerID="cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910"
Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.935523 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df02d4e5946f12697a477c564746c0643eb5f15274a14c3d06a69591cbd1ec31"}
Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.935602 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ccd75b5d3828f1adb9ce8aa05f94150f89eebd5d834d0713ccf4e4030e10a588"}
Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.935619 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb166692f535ae309f0a330049db64234953fce6b87a06483b83f286e5e9371d"}
Feb 23 17:35:22 crc kubenswrapper[4724]: I0223 17:35:22.935633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7c18866ea06e5333e8b098771e5574fc77efd3d9702290d2bb5dbad3c46524f8"}
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.959047 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.959714 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.959814 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ee0dfff5e7a7c8abbec1206901ad7c143cd328ca364dcd5dc27520bc2035d4d1"}
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.963581 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83433dcdaeb75d7ed83f7871874a572c1949c83d63856fb9331c370e6486e6dc"}
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.963957 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.963969 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af"
Feb 23 17:35:23 crc kubenswrapper[4724]: I0223 17:35:23.964065 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af"
Feb 23 17:35:25 crc kubenswrapper[4724]: I0223 17:35:25.982305 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:25 crc kubenswrapper[4724]: I0223 17:35:25.984525 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:25 crc kubenswrapper[4724]: I0223 17:35:25.991327 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.752375 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.753055 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.753234 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r"
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.754740 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.754943 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db" gracePeriod=600
Feb 23 17:35:27 crc kubenswrapper[4724]: E0223 17:35:27.801937 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda065b197_b354_4d9b_b2e9_7d4882a3d1a2.slice/crio-716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.988997 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db" exitCode=0
Feb 23 17:35:27 crc kubenswrapper[4724]: I0223 17:35:27.989052 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db"}
Feb 23 17:35:28 crc kubenswrapper[4724]: I0223 17:35:28.991653 4724 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:29 crc kubenswrapper[4724]: I0223 17:35:29.006170 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5"}
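Both probe failures above are connection-refused HTTP checks: the kubelet's prober issues a GET against the container's health endpoint and treats any transport error or non-2xx status as a failed check; after enough consecutive failures the container is killed with its grace period (600s here) and restarted, which is exactly the ContainerDied/ContainerStarted pair that follows. A rough sketch of the check itself, assuming a plain HTTP GET (the real prober in k8s.io/kubernetes/pkg/kubelet/prober also handles TCP, gRPC and exec probes plus failure thresholds):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // httpProbe mimics the shape of a kubelet HTTP probe: any transport error
    // or non-2xx status counts as a failure. Illustrative sketch only.
    func httpProbe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused", as logged above
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // The machine-config-daemon liveness endpoint from the log above.
        if err := httpProbe("http://127.0.0.1:8798/health", time.Second); err != nil {
            // Past the failure threshold, the kubelet kills the container
            // with the pod's grace period (600s above) and restarts it.
            fmt.Println("Probe failed:", err)
        }
    }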
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.016427 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.016485 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.022515 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.025692 4724 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="59731c39-38f6-4075-a583-a77a82e48b17" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.508705 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.509827 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 17:35:30 crc kubenswrapper[4724]: I0223 17:35:30.509872 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 17:35:31 crc kubenswrapper[4724]: I0223 17:35:31.022149 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:31 crc kubenswrapper[4724]: I0223 17:35:31.022191 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1d6606-75fc-41fd-9c23-18ee248da2af" Feb 23 17:35:34 crc kubenswrapper[4724]: I0223 17:35:34.969944 4724 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="59731c39-38f6-4075-a583-a77a82e48b17" Feb 23 17:35:39 crc kubenswrapper[4724]: I0223 17:35:39.247488 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 17:35:39 crc kubenswrapper[4724]: I0223 17:35:39.509553 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.043942 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.116788 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.134690 4724 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.388207 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.509177 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.509250 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.893190 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.899791 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 17:35:40 crc kubenswrapper[4724]: I0223 17:35:40.958516 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.081954 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.193463 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.266938 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.306868 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.361102 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.419669 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.437444 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.534225 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.551776 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.696659 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.829588 4724 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 17:35:41 crc kubenswrapper[4724]: I0223 17:35:41.918347 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.097516 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.249560 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.268246 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.341127 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.578356 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.601038 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.632468 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.694001 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.815029 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.836574 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.861913 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.897023 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 17:35:42 crc kubenswrapper[4724]: I0223 17:35:42.916380 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.064342 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.099014 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.132815 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.163628 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.350361 4724 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.350846 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.470502 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.711306 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.803898 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.893987 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 17:35:43 crc kubenswrapper[4724]: I0223 17:35:43.945366 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.056048 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.064595 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.267924 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.281351 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.335945 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.369627 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.385518 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.408631 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.523132 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.526441 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.555904 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.614880 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 
17:35:44.644635 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.646853 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.672616 4724 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.753703 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.790108 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.814620 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 17:35:44 crc kubenswrapper[4724]: I0223 17:35:44.963297 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.185500 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.282109 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.311288 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.345823 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.354174 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.379086 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.399224 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.431681 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.519757 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.550215 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.711659 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.743961 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.804502 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 17:35:45 crc kubenswrapper[4724]: I0223 17:35:45.909202 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.040790 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.074962 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.108499 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.155576 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.166619 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.202584 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.224433 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.231755 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.246793 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.281613 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.346148 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.374042 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.376036 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.454774 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.509143 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.510099 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.549204 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.556160 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.589177 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.635933 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.715439 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.770639 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.771707 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.781885 4724 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.823453 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.839673 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 17:35:46 crc kubenswrapper[4724]: I0223 17:35:46.919921 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.031655 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.083822 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.106061 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.159458 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.219360 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.288205 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.365709 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.387540 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.416141 4724 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.436882 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.462377 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.662161 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.773489 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.873426 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 17:35:47 crc kubenswrapper[4724]: I0223 17:35:47.893132 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.111283 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.113566 4724 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.133281 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.244108 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.325350 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.404794 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.419178 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.511707 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.523893 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.542716 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.566678 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.570950 4724 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.577639 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.662497 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.669421 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.684021 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.695683 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.698514 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.832580 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.858986 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 17:35:48 crc kubenswrapper[4724]: I0223 17:35:48.877049 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.003329 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.083920 4724 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.208768 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.237601 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.375502 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.541843 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.627069 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.637328 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.674442 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.713725 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.754528 4724 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.754878 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.770167 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.803954 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.807465 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.836563 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.855153 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.938922 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 17:35:49 crc kubenswrapper[4724]: I0223 17:35:49.961991 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.002671 4724 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.011820 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.011933 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.011998 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k","openshift-marketplace/redhat-operators-kbxv5","openshift-marketplace/community-operators-ft7cc","openshift-marketplace/certified-operators-jrhf2","openshift-marketplace/redhat-marketplace-g5s4k"]
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.012576 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g5s4k" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="registry-server" containerID="cri-o://f1c924e5ee621253b9356ba53d5169b05d8aff40a88161649fcfdc16dbcfd773" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.012880 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrhf2" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="registry-server" containerID="cri-o://b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.013202 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" containerID="cri-o://2a622a9edfe599272f64a126a8abb9947d66beef0ae978d3ac916027ef1086cf" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.013375 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kbxv5" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="registry-server" containerID="cri-o://94c977155f11c676f0c6d41213d0ac1ec8312266238d7b6f15a65f712cf16585" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.013610 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ft7cc" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="registry-server" containerID="cri-o://9e2b9c41ec9b333a86753347d3117701def8bf80c1c64b479dfba38e62383fa2" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.013870 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.018333 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.081627 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.081601578 podStartE2EDuration="21.081601578s" podCreationTimestamp="2026-02-23 17:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:35:50.074140828 +0000 UTC m=+305.890340438" watchObservedRunningTime="2026-02-23 17:35:50.081601578 +0000 UTC m=+305.897801218"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.140965 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.160686 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerID="94c977155f11c676f0c6d41213d0ac1ec8312266238d7b6f15a65f712cf16585" exitCode=0
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.160763 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerDied","Data":"94c977155f11c676f0c6d41213d0ac1ec8312266238d7b6f15a65f712cf16585"}
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.162781 4724 generic.go:334] "Generic (PLEG): container finished" podID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerID="2a622a9edfe599272f64a126a8abb9947d66beef0ae978d3ac916027ef1086cf" exitCode=0
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.162839 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" event={"ID":"e63a5cc4-56f4-414c-87e9-4ec6ff77de47","Type":"ContainerDied","Data":"2a622a9edfe599272f64a126a8abb9947d66beef0ae978d3ac916027ef1086cf"}
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.166097 4724 generic.go:334] "Generic (PLEG): container finished" podID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerID="f1c924e5ee621253b9356ba53d5169b05d8aff40a88161649fcfdc16dbcfd773" exitCode=0
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.166228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerDied","Data":"f1c924e5ee621253b9356ba53d5169b05d8aff40a88161649fcfdc16dbcfd773"}
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.168828 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerID="9e2b9c41ec9b333a86753347d3117701def8bf80c1c64b479dfba38e62383fa2" exitCode=0
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.168860 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerDied","Data":"9e2b9c41ec9b333a86753347d3117701def8bf80c1c64b479dfba38e62383fa2"}
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.203202 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.271499 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.413270 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.443667 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.460693 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.461766 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.482423 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.492337 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbxv5"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.508346 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.508418 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.508477 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.509129 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"ee0dfff5e7a7c8abbec1206901ad7c143cd328ca364dcd5dc27520bc2035d4d1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.509238 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://ee0dfff5e7a7c8abbec1206901ad7c143cd328ca364dcd5dc27520bc2035d4d1" gracePeriod=30
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.511954 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.526920 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.568447 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrhf2"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.572950 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5s4k"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.577430 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7cc"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.597276 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.601895 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n984k"
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.612231 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content\") pod \"b4ebec31-2766-49b2-9f05-9e6de41cf161\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.612319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnv75\" (UniqueName: \"kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75\") pod \"b4ebec31-2766-49b2-9f05-9e6de41cf161\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.612508 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities\") pod \"b4ebec31-2766-49b2-9f05-9e6de41cf161\" (UID: \"b4ebec31-2766-49b2-9f05-9e6de41cf161\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.614362 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities" (OuterVolumeSpecName: "utilities") pod "b4ebec31-2766-49b2-9f05-9e6de41cf161" (UID: "b4ebec31-2766-49b2-9f05-9e6de41cf161"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.625028 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75" (OuterVolumeSpecName: "kube-api-access-nnv75") pod "b4ebec31-2766-49b2-9f05-9e6de41cf161" (UID: "b4ebec31-2766-49b2-9f05-9e6de41cf161"). InnerVolumeSpecName "kube-api-access-nnv75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.656444 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.672225 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.706499 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.709130 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.710093 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.714450 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg5md\" (UniqueName: \"kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md\") pod \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.714609 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics\") pod \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.714740 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9fnk\" (UniqueName: \"kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk\") pod \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.714876 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjf6l\" (UniqueName: \"kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l\") pod \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.715425 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content\") pod \"828827aa-9a76-4ba6-962f-ad0ac278bd72\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.715622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities\") pod \"828827aa-9a76-4ba6-962f-ad0ac278bd72\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.715741 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content\") pod \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " Feb 23 17:35:50 crc 
kubenswrapper[4724]: I0223 17:35:50.715877 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities\") pod \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\" (UID: \"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.715973 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdh6\" (UniqueName: \"kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6\") pod \"828827aa-9a76-4ba6-962f-ad0ac278bd72\" (UID: \"828827aa-9a76-4ba6-962f-ad0ac278bd72\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.716254 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content\") pod \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.716414 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities" (OuterVolumeSpecName: "utilities") pod "828827aa-9a76-4ba6-962f-ad0ac278bd72" (UID: "828827aa-9a76-4ba6-962f-ad0ac278bd72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.716430 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities\") pod \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\" (UID: \"7a5401c3-8e65-4b1f-89a5-4bd1628b149c\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.716542 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca\") pod \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\" (UID: \"e63a5cc4-56f4-414c-87e9-4ec6ff77de47\") " Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.716813 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities" (OuterVolumeSpecName: "utilities") pod "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" (UID: "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.717349 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.717378 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.717411 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnv75\" (UniqueName: \"kubernetes.io/projected/b4ebec31-2766-49b2-9f05-9e6de41cf161-kube-api-access-nnv75\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.717426 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.717547 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e63a5cc4-56f4-414c-87e9-4ec6ff77de47" (UID: "e63a5cc4-56f4-414c-87e9-4ec6ff77de47"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.718264 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities" (OuterVolumeSpecName: "utilities") pod "7a5401c3-8e65-4b1f-89a5-4bd1628b149c" (UID: "7a5401c3-8e65-4b1f-89a5-4bd1628b149c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.719681 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6" (OuterVolumeSpecName: "kube-api-access-5mdh6") pod "828827aa-9a76-4ba6-962f-ad0ac278bd72" (UID: "828827aa-9a76-4ba6-962f-ad0ac278bd72"). InnerVolumeSpecName "kube-api-access-5mdh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.720171 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk" (OuterVolumeSpecName: "kube-api-access-w9fnk") pod "e63a5cc4-56f4-414c-87e9-4ec6ff77de47" (UID: "e63a5cc4-56f4-414c-87e9-4ec6ff77de47"). InnerVolumeSpecName "kube-api-access-w9fnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.720970 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e63a5cc4-56f4-414c-87e9-4ec6ff77de47" (UID: "e63a5cc4-56f4-414c-87e9-4ec6ff77de47"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.722576 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l" (OuterVolumeSpecName: "kube-api-access-rjf6l") pod "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" (UID: "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004"). InnerVolumeSpecName "kube-api-access-rjf6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.723143 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md" (OuterVolumeSpecName: "kube-api-access-vg5md") pod "7a5401c3-8e65-4b1f-89a5-4bd1628b149c" (UID: "7a5401c3-8e65-4b1f-89a5-4bd1628b149c"). InnerVolumeSpecName "kube-api-access-vg5md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.729264 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.742525 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "828827aa-9a76-4ba6-962f-ad0ac278bd72" (UID: "828827aa-9a76-4ba6-962f-ad0ac278bd72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.749842 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4ebec31-2766-49b2-9f05-9e6de41cf161" (UID: "b4ebec31-2766-49b2-9f05-9e6de41cf161"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.778046 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" (UID: "4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.782687 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a5401c3-8e65-4b1f-89a5-4bd1628b149c" (UID: "7a5401c3-8e65-4b1f-89a5-4bd1628b149c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819222 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819264 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg5md\" (UniqueName: \"kubernetes.io/projected/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-kube-api-access-vg5md\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819275 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819288 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9fnk\" (UniqueName: \"kubernetes.io/projected/e63a5cc4-56f4-414c-87e9-4ec6ff77de47-kube-api-access-w9fnk\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819298 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjf6l\" (UniqueName: \"kubernetes.io/projected/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-kube-api-access-rjf6l\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819308 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/828827aa-9a76-4ba6-962f-ad0ac278bd72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819317 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ebec31-2766-49b2-9f05-9e6de41cf161-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819326 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819334 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mdh6\" (UniqueName: \"kubernetes.io/projected/828827aa-9a76-4ba6-962f-ad0ac278bd72-kube-api-access-5mdh6\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819373 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.819383 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a5401c3-8e65-4b1f-89a5-4bd1628b149c-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.899911 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 17:35:50 crc kubenswrapper[4724]: I0223 17:35:50.962863 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 17:35:51 crc 
kubenswrapper[4724]: I0223 17:35:51.053565 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.063084 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.073993 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.136343 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.137153 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.156453 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.178789 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7cc" event={"ID":"7a5401c3-8e65-4b1f-89a5-4bd1628b149c","Type":"ContainerDied","Data":"de45fc482deeeed1b70f7bec0cef22f0148d7de5560804af93f887fefa59596a"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.178862 4724 scope.go:117] "RemoveContainer" containerID="9e2b9c41ec9b333a86753347d3117701def8bf80c1c64b479dfba38e62383fa2" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.178870 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7cc" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.184130 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbxv5" event={"ID":"b4ebec31-2766-49b2-9f05-9e6de41cf161","Type":"ContainerDied","Data":"601b85ecbec86bec05bb9bbc25566657aacae5742e5b5be1d2a6c7c4f9b3936f"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.184278 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbxv5" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.186540 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.186520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n984k" event={"ID":"e63a5cc4-56f4-414c-87e9-4ec6ff77de47","Type":"ContainerDied","Data":"586e2d3c4e00972ee9bd9a3eed2aa86c35a1ca5aa8c3baba5e5f19065d0186ca"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.194312 4724 generic.go:334] "Generic (PLEG): container finished" podID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerID="b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483" exitCode=0 Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.194756 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerDied","Data":"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.194818 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrhf2" event={"ID":"4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004","Type":"ContainerDied","Data":"96cd3d5abffa7e7cbcc02c362f821ea4b33a25ecf642a9550312fb30c3736aac"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.194947 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrhf2" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.199563 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5s4k" event={"ID":"828827aa-9a76-4ba6-962f-ad0ac278bd72","Type":"ContainerDied","Data":"6046f6c31ebda1bea92284c2c4e0ba48ea721268593e638e97380d80e789300c"} Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.199663 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5s4k" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.223030 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.239361 4724 scope.go:117] "RemoveContainer" containerID="fdc39fb5091999c77e2e885f8e20628577fbc6860cb7be12ced53e5b4b1bca00" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.246444 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.256707 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbxv5"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.267303 4724 scope.go:117] "RemoveContainer" containerID="b39c7a459045b0197a5009823d0f79af76e70dbb5a8ce221b4a5ffdfa5581dae" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.270897 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kbxv5"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.277380 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrhf2"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.289490 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrhf2"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.299733 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.302833 4724 scope.go:117] "RemoveContainer" containerID="94c977155f11c676f0c6d41213d0ac1ec8312266238d7b6f15a65f712cf16585" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.312104 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ft7cc"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.315901 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ft7cc"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.318109 4724 scope.go:117] "RemoveContainer" containerID="de81114211cb9fb4fa9e09c0f3163b7b9370aec75a2bc59b5bb9136afa237ad6" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.319071 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.321930 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n984k"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.325580 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5s4k"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.333806 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5s4k"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.334078 4724 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.334345 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63" gracePeriod=5 Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.336281 4724 scope.go:117] "RemoveContainer" containerID="25bfd3c4655199929f2d4bb08b8e479bc20d3298d96b639f969b51f99d4eac26" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.351181 4724 scope.go:117] "RemoveContainer" containerID="2a622a9edfe599272f64a126a8abb9947d66beef0ae978d3ac916027ef1086cf" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.366658 4724 scope.go:117] "RemoveContainer" containerID="b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.384234 4724 scope.go:117] "RemoveContainer" containerID="696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.420842 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.424663 4724 scope.go:117] "RemoveContainer" containerID="621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.437316 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.457640 4724 scope.go:117] "RemoveContainer" containerID="b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483" Feb 23 17:35:51 crc kubenswrapper[4724]: E0223 17:35:51.458590 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483\": container with ID starting with b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483 not found: ID does not exist" containerID="b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.458653 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483"} err="failed to get container status \"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483\": rpc error: code = NotFound desc = could not find container \"b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483\": container with ID starting with b605bf754ec200ed3d7ff1362ca6d58669a979703b563f2fc872cea75330b483 not found: ID does not exist" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.458687 4724 scope.go:117] "RemoveContainer" containerID="696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.462766 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 17:35:51 crc kubenswrapper[4724]: E0223 17:35:51.463015 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6\": container with ID starting with 696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6 not found: ID does not exist" containerID="696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.463044 4724 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6"} err="failed to get container status \"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6\": rpc error: code = NotFound desc = could not find container \"696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6\": container with ID starting with 696579ceca8e39da0e78f989aaf475ebdc14e35ba69884918baa53e37b7565f6 not found: ID does not exist" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.463067 4724 scope.go:117] "RemoveContainer" containerID="621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4" Feb 23 17:35:51 crc kubenswrapper[4724]: E0223 17:35:51.466717 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4\": container with ID starting with 621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4 not found: ID does not exist" containerID="621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.466795 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4"} err="failed to get container status \"621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4\": rpc error: code = NotFound desc = could not find container \"621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4\": container with ID starting with 621ac4e2803ca7848c717b297d3ba017183bd5928cba8dcc3cf13a7ba8289cc4 not found: ID does not exist" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.466838 4724 scope.go:117] "RemoveContainer" containerID="f1c924e5ee621253b9356ba53d5169b05d8aff40a88161649fcfdc16dbcfd773" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.484615 4724 scope.go:117] "RemoveContainer" containerID="9e9c60d71bbafac4664e006d8c64d1cef552bf79c0f4e11cf85af4c89eb5f540" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.496561 4724 scope.go:117] "RemoveContainer" containerID="23d9db63a7c114c6335d5e0eee86be89ed2475b44750dc18bdb1fe08c01d1dec" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.529626 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.591054 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.678655 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.771092 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.822029 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.944038 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.975536 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Feb 23 17:35:51 crc kubenswrapper[4724]: I0223 17:35:51.996931 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.004009 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.016865 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.017321 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.030527 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.234949 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.241012 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.366545 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.407749 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.434808 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.466528 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.534763 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.563465 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.582461 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.689016 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.756653 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.801305 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.808269 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.895294 4724 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.955714 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.961369 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" path="/var/lib/kubelet/pods/4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004/volumes" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.962283 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" path="/var/lib/kubelet/pods/7a5401c3-8e65-4b1f-89a5-4bd1628b149c/volumes" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.963243 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" path="/var/lib/kubelet/pods/828827aa-9a76-4ba6-962f-ad0ac278bd72/volumes" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.964689 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" path="/var/lib/kubelet/pods/b4ebec31-2766-49b2-9f05-9e6de41cf161/volumes" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.965589 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" path="/var/lib/kubelet/pods/e63a5cc4-56f4-414c-87e9-4ec6ff77de47/volumes" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.989682 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 17:35:52 crc kubenswrapper[4724]: I0223 17:35:52.990425 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 17:35:53 crc kubenswrapper[4724]: I0223 17:35:53.000967 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 17:35:53 crc kubenswrapper[4724]: I0223 17:35:53.091649 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 17:35:53 crc kubenswrapper[4724]: I0223 17:35:53.509168 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 17:35:53 crc kubenswrapper[4724]: I0223 17:35:53.511189 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 17:35:53 crc kubenswrapper[4724]: I0223 17:35:53.746932 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.029628 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.137537 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.211335 4724 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.231198 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.470592 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.507597 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.546839 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.570129 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.576862 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.594088 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.769829 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 17:35:54 crc kubenswrapper[4724]: I0223 17:35:54.807322 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 17:35:55 crc kubenswrapper[4724]: I0223 17:35:55.302335 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 17:35:55 crc kubenswrapper[4724]: I0223 17:35:55.417285 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 17:35:55 crc kubenswrapper[4724]: I0223 17:35:55.503579 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 17:35:55 crc kubenswrapper[4724]: I0223 17:35:55.529365 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 17:35:56 crc kubenswrapper[4724]: I0223 17:35:56.271487 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 17:35:56 crc kubenswrapper[4724]: I0223 17:35:56.335937 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 17:35:56 crc kubenswrapper[4724]: I0223 17:35:56.608997 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 17:35:56 crc kubenswrapper[4724]: I0223 17:35:56.922343 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 17:35:56 crc kubenswrapper[4724]: I0223 17:35:56.922476 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.012383 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.012701 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.012880 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.013198 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.013367 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.013916 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.014120 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.014127 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.014179 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.028675 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.114816 4724 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.114857 4724 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.114868 4724 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.114879 4724 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.114887 4724 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.251769 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.252229 4724 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63" exitCode=137 Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.252446 4724 scope.go:117] "RemoveContainer" containerID="8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.252540 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.274311 4724 scope.go:117] "RemoveContainer" containerID="8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63" Feb 23 17:35:57 crc kubenswrapper[4724]: E0223 17:35:57.274857 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63\": container with ID starting with 8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63 not found: ID does not exist" containerID="8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.274977 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63"} err="failed to get container status \"8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63\": rpc error: code = NotFound desc = could not find container \"8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63\": container with ID starting with 8712cd2b19385e7c725ef0ee6ad4b5e870a9b22965ac57a829c5d1fbda625d63 not found: ID does not exist" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.722490 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.766508 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 17:35:57 crc kubenswrapper[4724]: I0223 17:35:57.939221 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 17:35:58 crc kubenswrapper[4724]: I0223 17:35:58.201013 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 17:35:58 crc kubenswrapper[4724]: I0223 17:35:58.305553 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 17:35:58 crc kubenswrapper[4724]: I0223 17:35:58.960782 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.987173 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8klm"] Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988424 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988445 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988472 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988482 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988498 4724 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988508 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988528 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988538 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988559 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988569 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988589 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988599 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988622 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988632 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988642 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988651 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988672 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988681 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988700 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988709 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="extract-utilities" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988726 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988738 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="registry-server" Feb 23 17:36:07 crc 
kubenswrapper[4724]: E0223 17:36:07.988761 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988771 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="extract-content" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988787 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988797 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988806 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988815 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 17:36:07 crc kubenswrapper[4724]: E0223 17:36:07.988827 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" containerName="installer" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.988836 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" containerName="installer" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989116 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5401c3-8e65-4b1f-89a5-4bd1628b149c" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989141 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="129a58dd-706e-428b-8ab7-35194d9e0503" containerName="installer" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989163 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989178 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63a5cc4-56f4-414c-87e9-4ec6ff77de47" containerName="marketplace-operator" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989194 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ebec31-2766-49b2-9f05-9e6de41cf161" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989214 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2b6f03-51f9-4434-b1ae-f7f7ce8d2004" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.989228 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="828827aa-9a76-4ba6-962f-ad0ac278bd72" containerName="registry-server" Feb 23 17:36:07 crc kubenswrapper[4724]: I0223 17:36:07.996272 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.010854 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.018301 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8klm"] Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.040000 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.040339 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.040731 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.060152 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.070052 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.070148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.172163 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.172251 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxlh\" (UniqueName: \"kubernetes.io/projected/67588304-35a3-404e-bd48-9f7bc0ec5a44-kube-api-access-4rxlh\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.172310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.174071 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.200128 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67588304-35a3-404e-bd48-9f7bc0ec5a44-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.274101 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxlh\" (UniqueName: \"kubernetes.io/projected/67588304-35a3-404e-bd48-9f7bc0ec5a44-kube-api-access-4rxlh\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.297186 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rxlh\" (UniqueName: \"kubernetes.io/projected/67588304-35a3-404e-bd48-9f7bc0ec5a44-kube-api-access-4rxlh\") pod \"marketplace-operator-79b997595-w8klm\" (UID: \"67588304-35a3-404e-bd48-9f7bc0ec5a44\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.340378 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:08 crc kubenswrapper[4724]: I0223 17:36:08.540966 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8klm"] Feb 23 17:36:09 crc kubenswrapper[4724]: I0223 17:36:09.323005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" event={"ID":"67588304-35a3-404e-bd48-9f7bc0ec5a44","Type":"ContainerStarted","Data":"c746e4b88fa941b5302e13af2d3a886fe13d684f54bdec4d549a4002769ddeb0"} Feb 23 17:36:09 crc kubenswrapper[4724]: I0223 17:36:09.323366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:09 crc kubenswrapper[4724]: I0223 17:36:09.323380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" event={"ID":"67588304-35a3-404e-bd48-9f7bc0ec5a44","Type":"ContainerStarted","Data":"c5da617ee1bd3ba385a04a6abd6912a817d9c2e323e13ebe992b68049f8fefb9"} Feb 23 17:36:09 crc kubenswrapper[4724]: I0223 17:36:09.326953 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" Feb 23 17:36:09 crc kubenswrapper[4724]: I0223 17:36:09.342383 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w8klm" podStartSLOduration=2.342360964 podStartE2EDuration="2.342360964s" podCreationTimestamp="2026-02-23 17:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:09.33903957 +0000 UTC 
m=+325.155239170" watchObservedRunningTime="2026-02-23 17:36:09.342360964 +0000 UTC m=+325.158560564" Feb 23 17:36:11 crc kubenswrapper[4724]: I0223 17:36:11.762420 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:36:11 crc kubenswrapper[4724]: I0223 17:36:11.762774 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" podUID="57071a98-7587-4bd9-90a5-eb4ee3f86979" containerName="controller-manager" containerID="cri-o://c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742" gracePeriod=30 Feb 23 17:36:11 crc kubenswrapper[4724]: I0223 17:36:11.865063 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:36:11 crc kubenswrapper[4724]: I0223 17:36:11.865916 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" podUID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" containerName="route-controller-manager" containerID="cri-o://d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352" gracePeriod=30 Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.130160 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.222065 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.229945 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-client-ca\") pod \"57071a98-7587-4bd9-90a5-eb4ee3f86979\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.230039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wxj\" (UniqueName: \"kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj\") pod \"57071a98-7587-4bd9-90a5-eb4ee3f86979\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.230069 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert\") pod \"57071a98-7587-4bd9-90a5-eb4ee3f86979\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.230109 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config\") pod \"57071a98-7587-4bd9-90a5-eb4ee3f86979\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.230153 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles\") pod \"57071a98-7587-4bd9-90a5-eb4ee3f86979\" (UID: \"57071a98-7587-4bd9-90a5-eb4ee3f86979\") " Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.231314 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "57071a98-7587-4bd9-90a5-eb4ee3f86979" (UID: "57071a98-7587-4bd9-90a5-eb4ee3f86979"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.231877 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config" (OuterVolumeSpecName: "config") pod "57071a98-7587-4bd9-90a5-eb4ee3f86979" (UID: "57071a98-7587-4bd9-90a5-eb4ee3f86979"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.237363 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57071a98-7587-4bd9-90a5-eb4ee3f86979" (UID: "57071a98-7587-4bd9-90a5-eb4ee3f86979"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.237594 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj" (OuterVolumeSpecName: "kube-api-access-87wxj") pod "57071a98-7587-4bd9-90a5-eb4ee3f86979" (UID: "57071a98-7587-4bd9-90a5-eb4ee3f86979"). InnerVolumeSpecName "kube-api-access-87wxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331311 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca\") pod \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") "
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331466 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-config\") pod \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") "
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331534 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert\") pod \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") "
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6jvb\" (UniqueName: \"kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb\") pod \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\" (UID: \"5ee8459a-b10f-4b12-9222-b3d7407d98a8\") "
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331846 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331869 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wxj\" (UniqueName: \"kubernetes.io/projected/57071a98-7587-4bd9-90a5-eb4ee3f86979-kube-api-access-87wxj\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331883 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57071a98-7587-4bd9-90a5-eb4ee3f86979-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331898 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-config\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.331909 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57071a98-7587-4bd9-90a5-eb4ee3f86979-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.332789 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-config" (OuterVolumeSpecName: "config") pod "5ee8459a-b10f-4b12-9222-b3d7407d98a8" (UID: "5ee8459a-b10f-4b12-9222-b3d7407d98a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.332835 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ee8459a-b10f-4b12-9222-b3d7407d98a8" (UID: "5ee8459a-b10f-4b12-9222-b3d7407d98a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.334762 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ee8459a-b10f-4b12-9222-b3d7407d98a8" (UID: "5ee8459a-b10f-4b12-9222-b3d7407d98a8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.339796 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb" (OuterVolumeSpecName: "kube-api-access-n6jvb") pod "5ee8459a-b10f-4b12-9222-b3d7407d98a8" (UID: "5ee8459a-b10f-4b12-9222-b3d7407d98a8"). InnerVolumeSpecName "kube-api-access-n6jvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.341305 4724 generic.go:334] "Generic (PLEG): container finished" podID="57071a98-7587-4bd9-90a5-eb4ee3f86979" containerID="c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742" exitCode=0 Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.341416 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" event={"ID":"57071a98-7587-4bd9-90a5-eb4ee3f86979","Type":"ContainerDied","Data":"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742"} Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.341457 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.341481 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd75d5955-5vvll" event={"ID":"57071a98-7587-4bd9-90a5-eb4ee3f86979","Type":"ContainerDied","Data":"61e9b7173e29bba4f89b76f9fb728a12dc2524e99325050dfd3b6774336c2776"} Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.341504 4724 scope.go:117] "RemoveContainer" containerID="c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.346653 4724 generic.go:334] "Generic (PLEG): container finished" podID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" containerID="d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352" exitCode=0 Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.346701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" event={"ID":"5ee8459a-b10f-4b12-9222-b3d7407d98a8","Type":"ContainerDied","Data":"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352"} Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.346722 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" event={"ID":"5ee8459a-b10f-4b12-9222-b3d7407d98a8","Type":"ContainerDied","Data":"770e1e0374709054d5724f5abcfeef300ee1c5d7f29f64d3287e156888e71f23"} Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.346799 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.375873 4724 scope.go:117] "RemoveContainer" containerID="c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742" Feb 23 17:36:12 crc kubenswrapper[4724]: E0223 17:36:12.377069 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742\": container with ID starting with c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742 not found: ID does not exist" containerID="c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.377126 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742"} err="failed to get container status \"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742\": rpc error: code = NotFound desc = could not find container \"c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742\": container with ID starting with c0e023da4e634a008f8ee753e420cc4c02cbca36534b2e38487f668ae2c72742 not found: ID does not exist" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.377203 4724 scope.go:117] "RemoveContainer" containerID="d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.382551 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.390163 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bd75d5955-5vvll"] Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.393998 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.396879 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f99799b5-558d5"] Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.404946 4724 scope.go:117] "RemoveContainer" containerID="d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352" Feb 23 17:36:12 crc kubenswrapper[4724]: E0223 17:36:12.405765 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352\": container with ID starting with d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352 not found: ID does not exist" containerID="d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352" Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.405797 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352"} err="failed to get container status \"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352\": rpc error: code = NotFound desc = could not find container \"d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352\": container with ID starting with d54cbd485d8ec387dd3230159a071b413fdab9cfe83a27e04ce481daeee2b352 not found: ID does not exist" Feb 23 
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.433728 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee8459a-b10f-4b12-9222-b3d7407d98a8-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.433750 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6jvb\" (UniqueName: \"kubernetes.io/projected/5ee8459a-b10f-4b12-9222-b3d7407d98a8-kube-api-access-n6jvb\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.433763 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee8459a-b10f-4b12-9222-b3d7407d98a8-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.959833 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57071a98-7587-4bd9-90a5-eb4ee3f86979" path="/var/lib/kubelet/pods/57071a98-7587-4bd9-90a5-eb4ee3f86979/volumes"
Feb 23 17:36:12 crc kubenswrapper[4724]: I0223 17:36:12.960640 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" path="/var/lib/kubelet/pods/5ee8459a-b10f-4b12-9222-b3d7407d98a8/volumes"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.652837 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"]
Feb 23 17:36:13 crc kubenswrapper[4724]: E0223 17:36:13.653195 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57071a98-7587-4bd9-90a5-eb4ee3f86979" containerName="controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.653214 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="57071a98-7587-4bd9-90a5-eb4ee3f86979" containerName="controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: E0223 17:36:13.653236 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" containerName="route-controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.653249 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" containerName="route-controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.653408 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="57071a98-7587-4bd9-90a5-eb4ee3f86979" containerName="controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.653429 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee8459a-b10f-4b12-9222-b3d7407d98a8" containerName="route-controller-manager"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.653984 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.658342 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.658633 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.658806 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.659133 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.659221 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.659329 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.663030 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"]
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.683859 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.685235 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"]
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.685277 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"]
Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.685431 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.687544 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.689007 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.690829 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.691334 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.692330 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.700296 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752021 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752314 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752475 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752628 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt4p6\" (UniqueName: \"kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752799 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvx8b\" (UniqueName: \"kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.752961 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.753051 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.854698 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.854802 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt4p6\" (UniqueName: \"kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.854871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.854919 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvx8b\" (UniqueName: \"kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.854955 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.855018 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.855070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.855131 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.855200 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.856243 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.856445 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.857254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.857576 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: 
\"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.858183 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.866576 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.866628 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.879901 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt4p6\" (UniqueName: \"kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6\") pod \"controller-manager-7b74f5d554-jg44t\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.883041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvx8b\" (UniqueName: \"kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b\") pod \"route-controller-manager-6854cd686-hbmp6\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:13 crc kubenswrapper[4724]: I0223 17:36:13.983130 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.003744 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.216547 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"] Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.261895 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"] Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.363313 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" event={"ID":"bd5686c9-119a-431d-96ae-eff93362a7f8","Type":"ContainerStarted","Data":"7a933858ef2f511d74792f68baa1f2d31a6e512ec650a5fbc9c4990cf87ee5db"} Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.364182 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" event={"ID":"404f1c04-6c70-455b-ba20-6a76eda71441","Type":"ContainerStarted","Data":"45c7ac215bdafdbb564905a7c083cd32b613165653a88720d4a113363df38485"} Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.846481 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"] Feb 23 17:36:14 crc kubenswrapper[4724]: I0223 17:36:14.865037 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"] Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.370323 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" event={"ID":"bd5686c9-119a-431d-96ae-eff93362a7f8","Type":"ContainerStarted","Data":"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32"} Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.370630 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.374875 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" event={"ID":"404f1c04-6c70-455b-ba20-6a76eda71441","Type":"ContainerStarted","Data":"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2"} Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.375286 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.377165 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.382015 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.392161 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" podStartSLOduration=4.392139826 podStartE2EDuration="4.392139826s" podCreationTimestamp="2026-02-23 17:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 17:36:15.390261877 +0000 UTC m=+331.206461497" watchObservedRunningTime="2026-02-23 17:36:15.392139826 +0000 UTC m=+331.208339426" Feb 23 17:36:15 crc kubenswrapper[4724]: I0223 17:36:15.412936 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" podStartSLOduration=4.412907858 podStartE2EDuration="4.412907858s" podCreationTimestamp="2026-02-23 17:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:15.405730958 +0000 UTC m=+331.221930558" watchObservedRunningTime="2026-02-23 17:36:15.412907858 +0000 UTC m=+331.229107478" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.380911 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" podUID="404f1c04-6c70-455b-ba20-6a76eda71441" containerName="route-controller-manager" containerID="cri-o://97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2" gracePeriod=30 Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.381485 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" podUID="bd5686c9-119a-431d-96ae-eff93362a7f8" containerName="controller-manager" containerID="cri-o://22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32" gracePeriod=30 Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.818839 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.824723 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.875144 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:36:16 crc kubenswrapper[4724]: E0223 17:36:16.875683 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5686c9-119a-431d-96ae-eff93362a7f8" containerName="controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.875707 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5686c9-119a-431d-96ae-eff93362a7f8" containerName="controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: E0223 17:36:16.875721 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404f1c04-6c70-455b-ba20-6a76eda71441" containerName="route-controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.875732 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="404f1c04-6c70-455b-ba20-6a76eda71441" containerName="route-controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.876105 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5686c9-119a-431d-96ae-eff93362a7f8" containerName="controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.876144 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="404f1c04-6c70-455b-ba20-6a76eda71441" containerName="route-controller-manager" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.877197 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.883541 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897508 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca\") pod \"404f1c04-6c70-455b-ba20-6a76eda71441\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897664 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config\") pod \"bd5686c9-119a-431d-96ae-eff93362a7f8\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897745 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles\") pod \"bd5686c9-119a-431d-96ae-eff93362a7f8\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897781 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt4p6\" (UniqueName: \"kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6\") pod \"bd5686c9-119a-431d-96ae-eff93362a7f8\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897830 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert\") pod \"bd5686c9-119a-431d-96ae-eff93362a7f8\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897860 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config\") pod \"404f1c04-6c70-455b-ba20-6a76eda71441\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert\") pod \"404f1c04-6c70-455b-ba20-6a76eda71441\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca\") pod \"bd5686c9-119a-431d-96ae-eff93362a7f8\" (UID: \"bd5686c9-119a-431d-96ae-eff93362a7f8\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.897956 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvx8b\" (UniqueName: \"kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b\") pod \"404f1c04-6c70-455b-ba20-6a76eda71441\" (UID: \"404f1c04-6c70-455b-ba20-6a76eda71441\") " Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.898256 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.898292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hjn\" (UniqueName: \"kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.898397 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.898443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.899374 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd5686c9-119a-431d-96ae-eff93362a7f8" (UID: "bd5686c9-119a-431d-96ae-eff93362a7f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.900662 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config" (OuterVolumeSpecName: "config") pod "bd5686c9-119a-431d-96ae-eff93362a7f8" (UID: "bd5686c9-119a-431d-96ae-eff93362a7f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.901256 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca" (OuterVolumeSpecName: "client-ca") pod "404f1c04-6c70-455b-ba20-6a76eda71441" (UID: "404f1c04-6c70-455b-ba20-6a76eda71441"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.902031 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config" (OuterVolumeSpecName: "config") pod "404f1c04-6c70-455b-ba20-6a76eda71441" (UID: "404f1c04-6c70-455b-ba20-6a76eda71441"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.902138 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bd5686c9-119a-431d-96ae-eff93362a7f8" (UID: "bd5686c9-119a-431d-96ae-eff93362a7f8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.904656 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b" (OuterVolumeSpecName: "kube-api-access-gvx8b") pod "404f1c04-6c70-455b-ba20-6a76eda71441" (UID: "404f1c04-6c70-455b-ba20-6a76eda71441"). InnerVolumeSpecName "kube-api-access-gvx8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.904734 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "404f1c04-6c70-455b-ba20-6a76eda71441" (UID: "404f1c04-6c70-455b-ba20-6a76eda71441"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.906809 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd5686c9-119a-431d-96ae-eff93362a7f8" (UID: "bd5686c9-119a-431d-96ae-eff93362a7f8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.908026 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6" (OuterVolumeSpecName: "kube-api-access-zt4p6") pod "bd5686c9-119a-431d-96ae-eff93362a7f8" (UID: "bd5686c9-119a-431d-96ae-eff93362a7f8"). InnerVolumeSpecName "kube-api-access-zt4p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.999462 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.999805 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.999860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:16 crc kubenswrapper[4724]: I0223 17:36:16.999904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hjn\" (UniqueName: \"kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.000779 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001005 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001027 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvx8b\" (UniqueName: \"kubernetes.io/projected/404f1c04-6c70-455b-ba20-6a76eda71441-kube-api-access-gvx8b\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001039 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001050 4724 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001059 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd5686c9-119a-431d-96ae-eff93362a7f8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001069 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt4p6\" (UniqueName: \"kubernetes.io/projected/bd5686c9-119a-431d-96ae-eff93362a7f8-kube-api-access-zt4p6\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001078 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5686c9-119a-431d-96ae-eff93362a7f8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001090 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404f1c04-6c70-455b-ba20-6a76eda71441-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001102 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404f1c04-6c70-455b-ba20-6a76eda71441-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.001250 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.003325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.017006 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hjn\" (UniqueName: \"kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn\") pod \"route-controller-manager-7b6f7b46b7-vfpwb\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.202132 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.390370 4724 generic.go:334] "Generic (PLEG): container finished" podID="bd5686c9-119a-431d-96ae-eff93362a7f8" containerID="22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32" exitCode=0 Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.390448 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.390450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" event={"ID":"bd5686c9-119a-431d-96ae-eff93362a7f8","Type":"ContainerDied","Data":"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32"} Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.390534 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74f5d554-jg44t" event={"ID":"bd5686c9-119a-431d-96ae-eff93362a7f8","Type":"ContainerDied","Data":"7a933858ef2f511d74792f68baa1f2d31a6e512ec650a5fbc9c4990cf87ee5db"} Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.390557 4724 scope.go:117] "RemoveContainer" containerID="22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.392985 4724 generic.go:334] "Generic (PLEG): container finished" podID="404f1c04-6c70-455b-ba20-6a76eda71441" containerID="97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2" exitCode=0 Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.393011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" event={"ID":"404f1c04-6c70-455b-ba20-6a76eda71441","Type":"ContainerDied","Data":"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2"} Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.393036 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" event={"ID":"404f1c04-6c70-455b-ba20-6a76eda71441","Type":"ContainerDied","Data":"45c7ac215bdafdbb564905a7c083cd32b613165653a88720d4a113363df38485"} Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.393087 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.414064 4724 scope.go:117] "RemoveContainer" containerID="22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32" Feb 23 17:36:17 crc kubenswrapper[4724]: E0223 17:36:17.418066 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32\": container with ID starting with 22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32 not found: ID does not exist" containerID="22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.418163 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32"} err="failed to get container status \"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32\": rpc error: code = NotFound desc = could not find container \"22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32\": container with ID starting with 22835c930d303dac59aa2699289b7e8a4c2adf41380010f6d1867349d63f7c32 not found: ID does not exist" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.418284 4724 scope.go:117] "RemoveContainer" containerID="97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.424319 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"] Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.429786 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b74f5d554-jg44t"] Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.435564 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"] Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.438638 4724 scope.go:117] "RemoveContainer" containerID="97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2" Feb 23 17:36:17 crc kubenswrapper[4724]: E0223 17:36:17.439181 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2\": container with ID starting with 97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2 not found: ID does not exist" containerID="97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.439224 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2"} err="failed to get container status \"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2\": rpc error: code = NotFound desc = could not find container \"97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2\": container with ID starting with 97b6a623a35f92589a501153ecd297193e71ce18245c6d43a6bf9d37475a80c2 not found: ID does not exist" Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.440360 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6854cd686-hbmp6"] Feb 23 
17:36:17 crc kubenswrapper[4724]: W0223 17:36:17.508615 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4e0f34_629d_499a_8440_8c0df6c7c5db.slice/crio-aeceafbb3115a5b52b1c642cd8ad136e59f0ec16bd90b98a0e79885250986c67 WatchSource:0}: Error finding container aeceafbb3115a5b52b1c642cd8ad136e59f0ec16bd90b98a0e79885250986c67: Status 404 returned error can't find the container with id aeceafbb3115a5b52b1c642cd8ad136e59f0ec16bd90b98a0e79885250986c67 Feb 23 17:36:17 crc kubenswrapper[4724]: I0223 17:36:17.511557 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.404269 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" event={"ID":"5b4e0f34-629d-499a-8440-8c0df6c7c5db","Type":"ContainerStarted","Data":"f58c5fcb3465897ab18dc5510cc3a664a08a9b6f8613a75583a3bc086d19b987"} Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.404981 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" event={"ID":"5b4e0f34-629d-499a-8440-8c0df6c7c5db","Type":"ContainerStarted","Data":"aeceafbb3115a5b52b1c642cd8ad136e59f0ec16bd90b98a0e79885250986c67"} Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.405029 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.413922 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.429349 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" podStartSLOduration=3.429327078 podStartE2EDuration="3.429327078s" podCreationTimestamp="2026-02-23 17:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:18.42527092 +0000 UTC m=+334.241470560" watchObservedRunningTime="2026-02-23 17:36:18.429327078 +0000 UTC m=+334.245526678" Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.958114 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404f1c04-6c70-455b-ba20-6a76eda71441" path="/var/lib/kubelet/pods/404f1c04-6c70-455b-ba20-6a76eda71441/volumes" Feb 23 17:36:18 crc kubenswrapper[4724]: I0223 17:36:18.959171 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5686c9-119a-431d-96ae-eff93362a7f8" path="/var/lib/kubelet/pods/bd5686c9-119a-431d-96ae-eff93362a7f8/volumes" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.654967 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.656446 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.658870 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.659621 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.659848 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.660275 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.660890 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.661473 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.667064 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.678472 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.735773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.735850 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5sfh\" (UniqueName: \"kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.736153 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.736212 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.736290 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.837865 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.837959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.837994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5sfh\" (UniqueName: \"kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.838064 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.838087 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.839544 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.839762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.840151 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 
17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.850607 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.869742 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5sfh\" (UniqueName: \"kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh\") pod \"controller-manager-dcf7dddf5-wfkrv\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:19 crc kubenswrapper[4724]: I0223 17:36:19.974238 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.196791 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.417032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" event={"ID":"f7e1d2f2-9092-4cbc-8405-7467f7e702e6","Type":"ContainerStarted","Data":"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032"} Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.417338 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.417351 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" event={"ID":"f7e1d2f2-9092-4cbc-8405-7467f7e702e6","Type":"ContainerStarted","Data":"0d5b3b91e0a50ad4dbeecd45c9b9dfe4990045391863352dd4e555ed10a88810"} Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.419457 4724 patch_prober.go:28] interesting pod/controller-manager-dcf7dddf5-wfkrv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 23 17:36:20 crc kubenswrapper[4724]: I0223 17:36:20.419552 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.432629 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.436419 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.436998 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.437037 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ee0dfff5e7a7c8abbec1206901ad7c143cd328ca364dcd5dc27520bc2035d4d1" exitCode=137 Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.437112 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ee0dfff5e7a7c8abbec1206901ad7c143cd328ca364dcd5dc27520bc2035d4d1"} Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.437281 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03540c077eddfeb08cb3e44ccb2f91915e02b61bd970642cb30260e88104217a"} Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.437326 4724 scope.go:117] "RemoveContainer" containerID="cbce4ddd851479ee9b40b8c773a052d5a1d1420df107d8ea0e9fa2b927300910" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.446285 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:21 crc kubenswrapper[4724]: I0223 17:36:21.473758 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" podStartSLOduration=7.47373474 podStartE2EDuration="7.47373474s" podCreationTimestamp="2026-02-23 17:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:20.443238215 +0000 UTC m=+336.259437815" watchObservedRunningTime="2026-02-23 17:36:21.47373474 +0000 UTC m=+337.289934340" Feb 23 17:36:22 crc kubenswrapper[4724]: I0223 17:36:22.447169 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 23 17:36:22 crc kubenswrapper[4724]: I0223 17:36:22.449215 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 17:36:29 crc kubenswrapper[4724]: I0223 17:36:29.972024 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:36:30 crc kubenswrapper[4724]: I0223 17:36:30.508648 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:36:30 crc kubenswrapper[4724]: I0223 17:36:30.513300 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:36:31 crc kubenswrapper[4724]: I0223 17:36:31.509168 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.695676 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fwwcq"] Feb 23 
17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.697452 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.714574 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fwwcq"] Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.720210 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.721918 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerName="controller-manager" containerID="cri-o://932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032" gracePeriod=30 Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64e4997f-79a2-4ddd-afa2-c8b13f631b80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847294 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-tls\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-certificates\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847491 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64e4997f-79a2-4ddd-afa2-c8b13f631b80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847573 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-bound-sa-token\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.847966 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7nn\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-kube-api-access-kv7nn\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.848130 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-trusted-ca\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.900552 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.949614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv7nn\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-kube-api-access-kv7nn\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-trusted-ca\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950168 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64e4997f-79a2-4ddd-afa2-c8b13f631b80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950254 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-tls\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-certificates\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950452 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64e4997f-79a2-4ddd-afa2-c8b13f631b80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.950543 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-bound-sa-token\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.951327 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64e4997f-79a2-4ddd-afa2-c8b13f631b80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.952011 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-trusted-ca\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.953172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-certificates\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.958010 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-registry-tls\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.958709 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64e4997f-79a2-4ddd-afa2-c8b13f631b80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.970648 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7nn\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-kube-api-access-kv7nn\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:40 crc kubenswrapper[4724]: I0223 17:36:40.977445 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4997f-79a2-4ddd-afa2-c8b13f631b80-bound-sa-token\") pod \"image-registry-66df7c8f76-fwwcq\" (UID: \"64e4997f-79a2-4ddd-afa2-c8b13f631b80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.028725 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.324287 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.459205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config\") pod \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.459287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5sfh\" (UniqueName: \"kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh\") pod \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.459323 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca\") pod \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.459376 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles\") pod \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.460173 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "f7e1d2f2-9092-4cbc-8405-7467f7e702e6" (UID: "f7e1d2f2-9092-4cbc-8405-7467f7e702e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.460286 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config" (OuterVolumeSpecName: "config") pod "f7e1d2f2-9092-4cbc-8405-7467f7e702e6" (UID: "f7e1d2f2-9092-4cbc-8405-7467f7e702e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.460333 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f7e1d2f2-9092-4cbc-8405-7467f7e702e6" (UID: "f7e1d2f2-9092-4cbc-8405-7467f7e702e6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.460507 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert\") pod \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\" (UID: \"f7e1d2f2-9092-4cbc-8405-7467f7e702e6\") " Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.461284 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.461317 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.461333 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.465029 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7e1d2f2-9092-4cbc-8405-7467f7e702e6" (UID: "f7e1d2f2-9092-4cbc-8405-7467f7e702e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.465146 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh" (OuterVolumeSpecName: "kube-api-access-c5sfh") pod "f7e1d2f2-9092-4cbc-8405-7467f7e702e6" (UID: "f7e1d2f2-9092-4cbc-8405-7467f7e702e6"). InnerVolumeSpecName "kube-api-access-c5sfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.489940 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fwwcq"] Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.562370 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.562748 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5sfh\" (UniqueName: \"kubernetes.io/projected/f7e1d2f2-9092-4cbc-8405-7467f7e702e6-kube-api-access-c5sfh\") on node \"crc\" DevicePath \"\"" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.562504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" event={"ID":"64e4997f-79a2-4ddd-afa2-c8b13f631b80","Type":"ContainerStarted","Data":"2b206c476d9a778ab1e06bd2d0c86ec92080e272d3d368c3843dc60d0b3e43b9"} Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.564723 4724 generic.go:334] "Generic (PLEG): container finished" podID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerID="932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032" exitCode=0 Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.564772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" event={"ID":"f7e1d2f2-9092-4cbc-8405-7467f7e702e6","Type":"ContainerDied","Data":"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032"} Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.564798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" event={"ID":"f7e1d2f2-9092-4cbc-8405-7467f7e702e6","Type":"ContainerDied","Data":"0d5b3b91e0a50ad4dbeecd45c9b9dfe4990045391863352dd4e555ed10a88810"} Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.564825 4724 scope.go:117] "RemoveContainer" containerID="932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.564862 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.582553 4724 scope.go:117] "RemoveContainer" containerID="932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032" Feb 23 17:36:41 crc kubenswrapper[4724]: E0223 17:36:41.583047 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032\": container with ID starting with 932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032 not found: ID does not exist" containerID="932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.583102 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032"} err="failed to get container status \"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032\": rpc error: code = NotFound desc = could not find container \"932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032\": container with ID starting with 932564b6cb2fd599a17421201a91b81d14add0f557167b1c0239d1b687e90032 not found: ID does not exist" Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.601663 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:41 crc kubenswrapper[4724]: I0223 17:36:41.606181 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dcf7dddf5-wfkrv"] Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.575121 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" event={"ID":"64e4997f-79a2-4ddd-afa2-c8b13f631b80","Type":"ContainerStarted","Data":"d970ae4a2aef8eab4958d3a6bf24e5175feb106a30fa60c347827ada6e95233a"} Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.575209 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.675453 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" podStartSLOduration=2.675430514 podStartE2EDuration="2.675430514s" podCreationTimestamp="2026-02-23 17:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:42.6048945 +0000 UTC m=+358.421094090" watchObservedRunningTime="2026-02-23 17:36:42.675430514 +0000 UTC m=+358.491630124" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.678043 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-575cd6c7fd-72ffq"] Feb 23 17:36:42 crc kubenswrapper[4724]: E0223 17:36:42.678319 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerName="controller-manager" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.678341 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerName="controller-manager" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.678504 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" containerName="controller-manager" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.679039 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.685533 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.697691 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.709683 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.710307 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.718762 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.719119 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.722729 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-575cd6c7fd-72ffq"] Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.722839 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.778194 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b3b78a-5538-4342-a5de-afcb48e0ed87-serving-cert\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.778653 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-config\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.778682 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-proxy-ca-bundles\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.778728 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-client-ca\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: 
I0223 17:36:42.779013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxzm\" (UniqueName: \"kubernetes.io/projected/a1b3b78a-5538-4342-a5de-afcb48e0ed87-kube-api-access-cmxzm\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.882594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxzm\" (UniqueName: \"kubernetes.io/projected/a1b3b78a-5538-4342-a5de-afcb48e0ed87-kube-api-access-cmxzm\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.882673 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b3b78a-5538-4342-a5de-afcb48e0ed87-serving-cert\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.882705 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-config\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.882734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-proxy-ca-bundles\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.882779 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-client-ca\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.884308 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-proxy-ca-bundles\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.885143 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-config\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.887176 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a1b3b78a-5538-4342-a5de-afcb48e0ed87-client-ca\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.889553 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b3b78a-5538-4342-a5de-afcb48e0ed87-serving-cert\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.902911 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxzm\" (UniqueName: \"kubernetes.io/projected/a1b3b78a-5538-4342-a5de-afcb48e0ed87-kube-api-access-cmxzm\") pod \"controller-manager-575cd6c7fd-72ffq\" (UID: \"a1b3b78a-5538-4342-a5de-afcb48e0ed87\") " pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:42 crc kubenswrapper[4724]: I0223 17:36:42.960079 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e1d2f2-9092-4cbc-8405-7467f7e702e6" path="/var/lib/kubelet/pods/f7e1d2f2-9092-4cbc-8405-7467f7e702e6/volumes" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.021364 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.456689 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-575cd6c7fd-72ffq"] Feb 23 17:36:43 crc kubenswrapper[4724]: W0223 17:36:43.467675 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b3b78a_5538_4342_a5de_afcb48e0ed87.slice/crio-e6f07c9e6f927a11061029b760b12a03d37d99f8e3e34b00c57753cf6b7d853d WatchSource:0}: Error finding container e6f07c9e6f927a11061029b760b12a03d37d99f8e3e34b00c57753cf6b7d853d: Status 404 returned error can't find the container with id e6f07c9e6f927a11061029b760b12a03d37d99f8e3e34b00c57753cf6b7d853d Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.583190 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" event={"ID":"a1b3b78a-5538-4342-a5de-afcb48e0ed87","Type":"ContainerStarted","Data":"e6f07c9e6f927a11061029b760b12a03d37d99f8e3e34b00c57753cf6b7d853d"} Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.695732 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xwjqh"] Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.696949 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.699716 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.707277 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwjqh"] Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.795442 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-catalog-content\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.795513 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-utilities\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.795551 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9csv\" (UniqueName: \"kubernetes.io/projected/24d64c3d-d544-4a74-ae90-36b17131a812-kube-api-access-p9csv\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.896963 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-catalog-content\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.897044 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-utilities\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.897074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9csv\" (UniqueName: \"kubernetes.io/projected/24d64c3d-d544-4a74-ae90-36b17131a812-kube-api-access-p9csv\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.897562 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-catalog-content\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.897693 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d64c3d-d544-4a74-ae90-36b17131a812-utilities\") pod \"community-operators-xwjqh\" (UID: 
\"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.906215 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pmtdz"] Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.907374 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.909960 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 17:36:43 crc kubenswrapper[4724]: I0223 17:36:43.919946 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9csv\" (UniqueName: \"kubernetes.io/projected/24d64c3d-d544-4a74-ae90-36b17131a812-kube-api-access-p9csv\") pod \"community-operators-xwjqh\" (UID: \"24d64c3d-d544-4a74-ae90-36b17131a812\") " pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.005063 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmtdz"] Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.015577 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.099552 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-catalog-content\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.099996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-utilities\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.100042 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sl4z\" (UniqueName: \"kubernetes.io/projected/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-kube-api-access-8sl4z\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.201699 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-catalog-content\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.201772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-utilities\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.201805 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8sl4z\" (UniqueName: \"kubernetes.io/projected/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-kube-api-access-8sl4z\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.202453 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-catalog-content\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.202662 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-utilities\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.234440 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sl4z\" (UniqueName: \"kubernetes.io/projected/7bba3085-58d9-4a69-b93b-f4b0034fa2ec-kube-api-access-8sl4z\") pod \"certified-operators-pmtdz\" (UID: \"7bba3085-58d9-4a69-b93b-f4b0034fa2ec\") " pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.276631 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.486028 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwjqh"] Feb 23 17:36:44 crc kubenswrapper[4724]: W0223 17:36:44.496651 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d64c3d_d544_4a74_ae90_36b17131a812.slice/crio-c257f66d9e5529869b315245359703fb8c7cbbcd63a8e9fa525ee82236f1432b WatchSource:0}: Error finding container c257f66d9e5529869b315245359703fb8c7cbbcd63a8e9fa525ee82236f1432b: Status 404 returned error can't find the container with id c257f66d9e5529869b315245359703fb8c7cbbcd63a8e9fa525ee82236f1432b Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.590716 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" event={"ID":"a1b3b78a-5538-4342-a5de-afcb48e0ed87","Type":"ContainerStarted","Data":"cc9131b8177d98a2f2d329ab0c12178a9629e6316613aa61d60d34b39f27c529"} Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.591082 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.592180 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwjqh" event={"ID":"24d64c3d-d544-4a74-ae90-36b17131a812","Type":"ContainerStarted","Data":"c257f66d9e5529869b315245359703fb8c7cbbcd63a8e9fa525ee82236f1432b"} Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.597203 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.608642 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-575cd6c7fd-72ffq" podStartSLOduration=4.608617946 podStartE2EDuration="4.608617946s" podCreationTimestamp="2026-02-23 17:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:36:44.606787727 +0000 UTC m=+360.422987327" watchObservedRunningTime="2026-02-23 17:36:44.608617946 +0000 UTC m=+360.424817556" Feb 23 17:36:44 crc kubenswrapper[4724]: I0223 17:36:44.698880 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmtdz"] Feb 23 17:36:45 crc kubenswrapper[4724]: I0223 17:36:45.600541 4724 generic.go:334] "Generic (PLEG): container finished" podID="7bba3085-58d9-4a69-b93b-f4b0034fa2ec" containerID="6722805b4a29d1072b977535720df2db87c25093c6fa5e79398aa16faf9ac487" exitCode=0 Feb 23 17:36:45 crc kubenswrapper[4724]: I0223 17:36:45.600619 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmtdz" event={"ID":"7bba3085-58d9-4a69-b93b-f4b0034fa2ec","Type":"ContainerDied","Data":"6722805b4a29d1072b977535720df2db87c25093c6fa5e79398aa16faf9ac487"} Feb 23 17:36:45 crc kubenswrapper[4724]: I0223 17:36:45.600928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmtdz" event={"ID":"7bba3085-58d9-4a69-b93b-f4b0034fa2ec","Type":"ContainerStarted","Data":"873a8a0102b29910cace14e4bc748b3bce566ff1d709be1a763f538de5b52f5c"} Feb 23 17:36:45 crc kubenswrapper[4724]: I0223 17:36:45.603029 4724 generic.go:334] "Generic (PLEG): container finished" podID="24d64c3d-d544-4a74-ae90-36b17131a812" containerID="ceb55b4378c27482944f1bea84c091af6a461e9b73e9dacf20c24335c0077c7e" exitCode=0 Feb 23 17:36:45 crc kubenswrapper[4724]: I0223 17:36:45.603646 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwjqh" event={"ID":"24d64c3d-d544-4a74-ae90-36b17131a812","Type":"ContainerDied","Data":"ceb55b4378c27482944f1bea84c091af6a461e9b73e9dacf20c24335c0077c7e"} Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.102276 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mb467"] Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.103722 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.107332 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.116144 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb467"] Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.240229 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-catalog-content\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.240280 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-utilities\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.240325 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m892r\" (UniqueName: \"kubernetes.io/projected/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-kube-api-access-m892r\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.293234 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9smz"] Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.310366 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.314041 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.329461 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9smz"] Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.341424 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m892r\" (UniqueName: \"kubernetes.io/projected/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-kube-api-access-m892r\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.341520 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-catalog-content\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.341546 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-utilities\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.342526 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-utilities\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.342616 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-catalog-content\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.364246 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m892r\" (UniqueName: \"kubernetes.io/projected/0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d-kube-api-access-m892r\") pod \"redhat-marketplace-mb467\" (UID: \"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d\") " pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.432262 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.443252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-catalog-content\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.443424 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cfx\" (UniqueName: \"kubernetes.io/projected/474bcfee-4643-4fdc-b7c9-d823ecb79b90-kube-api-access-c2cfx\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.443486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-utilities\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.544333 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-utilities\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.544673 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-catalog-content\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.544733 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cfx\" (UniqueName: \"kubernetes.io/projected/474bcfee-4643-4fdc-b7c9-d823ecb79b90-kube-api-access-c2cfx\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.545117 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-utilities\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.545251 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474bcfee-4643-4fdc-b7c9-d823ecb79b90-catalog-content\") pod \"redhat-operators-z9smz\" (UID: \"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.566965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cfx\" (UniqueName: \"kubernetes.io/projected/474bcfee-4643-4fdc-b7c9-d823ecb79b90-kube-api-access-c2cfx\") pod \"redhat-operators-z9smz\" (UID: 
\"474bcfee-4643-4fdc-b7c9-d823ecb79b90\") " pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.622708 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwjqh" event={"ID":"24d64c3d-d544-4a74-ae90-36b17131a812","Type":"ContainerStarted","Data":"de348ec923df2766289293397ff98f9dcabb37aa229ba0715b3b3168a9c54c34"} Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.624458 4724 generic.go:334] "Generic (PLEG): container finished" podID="7bba3085-58d9-4a69-b93b-f4b0034fa2ec" containerID="24f85f756ac9accabb4273d080bb5b574cff194cb9c6037216618c80bdf61783" exitCode=0 Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.624669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmtdz" event={"ID":"7bba3085-58d9-4a69-b93b-f4b0034fa2ec","Type":"ContainerDied","Data":"24f85f756ac9accabb4273d080bb5b574cff194cb9c6037216618c80bdf61783"} Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.640372 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:46 crc kubenswrapper[4724]: I0223 17:36:46.886307 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mb467"] Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.058769 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9smz"] Feb 23 17:36:47 crc kubenswrapper[4724]: W0223 17:36:47.096193 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod474bcfee_4643_4fdc_b7c9_d823ecb79b90.slice/crio-63f7514fbdb262e0acb9c2654b91be6c7b493eceb960b23b6273a7bf0afb6fe1 WatchSource:0}: Error finding container 63f7514fbdb262e0acb9c2654b91be6c7b493eceb960b23b6273a7bf0afb6fe1: Status 404 returned error can't find the container with id 63f7514fbdb262e0acb9c2654b91be6c7b493eceb960b23b6273a7bf0afb6fe1 Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.640001 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmtdz" event={"ID":"7bba3085-58d9-4a69-b93b-f4b0034fa2ec","Type":"ContainerStarted","Data":"62c5c7744ea78c32a63c40dffcd6e4256ffecc90acc14f72f0681093cb630ab3"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.642939 4724 generic.go:334] "Generic (PLEG): container finished" podID="474bcfee-4643-4fdc-b7c9-d823ecb79b90" containerID="9ac6b4ff60b78a9251506abb92239b36a8a3d00135bbf5e858ee4b0447ed022a" exitCode=0 Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.643018 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9smz" event={"ID":"474bcfee-4643-4fdc-b7c9-d823ecb79b90","Type":"ContainerDied","Data":"9ac6b4ff60b78a9251506abb92239b36a8a3d00135bbf5e858ee4b0447ed022a"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.643046 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9smz" event={"ID":"474bcfee-4643-4fdc-b7c9-d823ecb79b90","Type":"ContainerStarted","Data":"63f7514fbdb262e0acb9c2654b91be6c7b493eceb960b23b6273a7bf0afb6fe1"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.645730 4724 generic.go:334] "Generic (PLEG): container finished" podID="24d64c3d-d544-4a74-ae90-36b17131a812" containerID="de348ec923df2766289293397ff98f9dcabb37aa229ba0715b3b3168a9c54c34" exitCode=0 Feb 23 
17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.645800 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwjqh" event={"ID":"24d64c3d-d544-4a74-ae90-36b17131a812","Type":"ContainerDied","Data":"de348ec923df2766289293397ff98f9dcabb37aa229ba0715b3b3168a9c54c34"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.647829 4724 generic.go:334] "Generic (PLEG): container finished" podID="0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d" containerID="398bd42d35f2cdd4c83b6faed4de39ea24f482e57cd73170f0a3cffa78ae1661" exitCode=0 Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.647853 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb467" event={"ID":"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d","Type":"ContainerDied","Data":"398bd42d35f2cdd4c83b6faed4de39ea24f482e57cd73170f0a3cffa78ae1661"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.647870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb467" event={"ID":"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d","Type":"ContainerStarted","Data":"86aeb0bef2233f42a50cbb13df504841556d8b9a6ad13cb7c1feb871bf56d65a"} Feb 23 17:36:47 crc kubenswrapper[4724]: I0223 17:36:47.664685 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pmtdz" podStartSLOduration=3.228950374 podStartE2EDuration="4.664664247s" podCreationTimestamp="2026-02-23 17:36:43 +0000 UTC" firstStartedPulling="2026-02-23 17:36:45.605817946 +0000 UTC m=+361.422017576" lastFinishedPulling="2026-02-23 17:36:47.041531829 +0000 UTC m=+362.857731449" observedRunningTime="2026-02-23 17:36:47.658976196 +0000 UTC m=+363.475175816" watchObservedRunningTime="2026-02-23 17:36:47.664664247 +0000 UTC m=+363.480863847" Feb 23 17:36:48 crc kubenswrapper[4724]: I0223 17:36:48.656578 4724 generic.go:334] "Generic (PLEG): container finished" podID="0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d" containerID="01a3086d9ecac866af82fcc30b3a0ec050b2b0937833a15df180586c4140fb03" exitCode=0 Feb 23 17:36:48 crc kubenswrapper[4724]: I0223 17:36:48.656688 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb467" event={"ID":"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d","Type":"ContainerDied","Data":"01a3086d9ecac866af82fcc30b3a0ec050b2b0937833a15df180586c4140fb03"} Feb 23 17:36:48 crc kubenswrapper[4724]: I0223 17:36:48.661175 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9smz" event={"ID":"474bcfee-4643-4fdc-b7c9-d823ecb79b90","Type":"ContainerStarted","Data":"62fb42607a0af641ad8168834c2c016a5354ec80acd17f2951730a2f8bfc67b3"} Feb 23 17:36:48 crc kubenswrapper[4724]: I0223 17:36:48.665725 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwjqh" event={"ID":"24d64c3d-d544-4a74-ae90-36b17131a812","Type":"ContainerStarted","Data":"4567370954ddd59989d0f707bfd21e287741da9e4644d33f1251332045c90054"} Feb 23 17:36:48 crc kubenswrapper[4724]: I0223 17:36:48.725301 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xwjqh" podStartSLOduration=3.295078222 podStartE2EDuration="5.725283782s" podCreationTimestamp="2026-02-23 17:36:43 +0000 UTC" firstStartedPulling="2026-02-23 17:36:45.606252838 +0000 UTC m=+361.422452448" lastFinishedPulling="2026-02-23 17:36:48.036458408 +0000 UTC m=+363.852658008" 
observedRunningTime="2026-02-23 17:36:48.723214307 +0000 UTC m=+364.539413907" watchObservedRunningTime="2026-02-23 17:36:48.725283782 +0000 UTC m=+364.541483382" Feb 23 17:36:49 crc kubenswrapper[4724]: I0223 17:36:49.674230 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mb467" event={"ID":"0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d","Type":"ContainerStarted","Data":"785ef75b18fdf9a99e81a322c8d83c2b79eb5c6b633196c45ee24a561acfdbf7"} Feb 23 17:36:49 crc kubenswrapper[4724]: I0223 17:36:49.677430 4724 generic.go:334] "Generic (PLEG): container finished" podID="474bcfee-4643-4fdc-b7c9-d823ecb79b90" containerID="62fb42607a0af641ad8168834c2c016a5354ec80acd17f2951730a2f8bfc67b3" exitCode=0 Feb 23 17:36:49 crc kubenswrapper[4724]: I0223 17:36:49.677530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9smz" event={"ID":"474bcfee-4643-4fdc-b7c9-d823ecb79b90","Type":"ContainerDied","Data":"62fb42607a0af641ad8168834c2c016a5354ec80acd17f2951730a2f8bfc67b3"} Feb 23 17:36:49 crc kubenswrapper[4724]: I0223 17:36:49.700944 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mb467" podStartSLOduration=2.193988883 podStartE2EDuration="3.700925779s" podCreationTimestamp="2026-02-23 17:36:46 +0000 UTC" firstStartedPulling="2026-02-23 17:36:47.649575166 +0000 UTC m=+363.465774766" lastFinishedPulling="2026-02-23 17:36:49.156512062 +0000 UTC m=+364.972711662" observedRunningTime="2026-02-23 17:36:49.698650189 +0000 UTC m=+365.514849789" watchObservedRunningTime="2026-02-23 17:36:49.700925779 +0000 UTC m=+365.517125379" Feb 23 17:36:50 crc kubenswrapper[4724]: I0223 17:36:50.686995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9smz" event={"ID":"474bcfee-4643-4fdc-b7c9-d823ecb79b90","Type":"ContainerStarted","Data":"88260a1604f7e2b288e59c938d2cb3bf9e5b3ccc0f7c7017d792bc53e1d16be6"} Feb 23 17:36:50 crc kubenswrapper[4724]: I0223 17:36:50.715540 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9smz" podStartSLOduration=2.298578173 podStartE2EDuration="4.715512291s" podCreationTimestamp="2026-02-23 17:36:46 +0000 UTC" firstStartedPulling="2026-02-23 17:36:47.645048486 +0000 UTC m=+363.461248086" lastFinishedPulling="2026-02-23 17:36:50.061982604 +0000 UTC m=+365.878182204" observedRunningTime="2026-02-23 17:36:50.70833076 +0000 UTC m=+366.524530370" watchObservedRunningTime="2026-02-23 17:36:50.715512291 +0000 UTC m=+366.531711931" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.016702 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.018368 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.086319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.277548 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.277615 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.342326 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.789478 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xwjqh" Feb 23 17:36:54 crc kubenswrapper[4724]: I0223 17:36:54.797685 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pmtdz" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.432979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.433045 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.501119 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.641625 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.641685 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:36:56 crc kubenswrapper[4724]: I0223 17:36:56.782237 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mb467" Feb 23 17:36:57 crc kubenswrapper[4724]: I0223 17:36:57.686375 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9smz" podUID="474bcfee-4643-4fdc-b7c9-d823ecb79b90" containerName="registry-server" probeResult="failure" output=< Feb 23 17:36:57 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 17:36:57 crc kubenswrapper[4724]: > Feb 23 17:37:01 crc kubenswrapper[4724]: I0223 17:37:01.042993 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fwwcq" Feb 23 17:37:01 crc kubenswrapper[4724]: I0223 17:37:01.119498 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:37:06 crc kubenswrapper[4724]: I0223 17:37:06.687192 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:37:06 crc kubenswrapper[4724]: I0223 17:37:06.749639 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9smz" Feb 23 17:37:11 crc kubenswrapper[4724]: I0223 17:37:11.725590 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:37:11 crc kubenswrapper[4724]: I0223 17:37:11.726491 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" podUID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" containerName="route-controller-manager" containerID="cri-o://f58c5fcb3465897ab18dc5510cc3a664a08a9b6f8613a75583a3bc086d19b987" gracePeriod=30 Feb 23 
17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.097164 4724 generic.go:334] "Generic (PLEG): container finished" podID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" containerID="f58c5fcb3465897ab18dc5510cc3a664a08a9b6f8613a75583a3bc086d19b987" exitCode=0 Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.097279 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" event={"ID":"5b4e0f34-629d-499a-8440-8c0df6c7c5db","Type":"ContainerDied","Data":"f58c5fcb3465897ab18dc5510cc3a664a08a9b6f8613a75583a3bc086d19b987"} Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.220129 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.323413 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config\") pod \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.323520 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca\") pod \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.323578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9hjn\" (UniqueName: \"kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn\") pod \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.323761 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert\") pod \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\" (UID: \"5b4e0f34-629d-499a-8440-8c0df6c7c5db\") " Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.324812 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b4e0f34-629d-499a-8440-8c0df6c7c5db" (UID: "5b4e0f34-629d-499a-8440-8c0df6c7c5db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.325154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config" (OuterVolumeSpecName: "config") pod "5b4e0f34-629d-499a-8440-8c0df6c7c5db" (UID: "5b4e0f34-629d-499a-8440-8c0df6c7c5db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.327801 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.327828 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b4e0f34-629d-499a-8440-8c0df6c7c5db-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.330677 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b4e0f34-629d-499a-8440-8c0df6c7c5db" (UID: "5b4e0f34-629d-499a-8440-8c0df6c7c5db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.332738 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn" (OuterVolumeSpecName: "kube-api-access-h9hjn") pod "5b4e0f34-629d-499a-8440-8c0df6c7c5db" (UID: "5b4e0f34-629d-499a-8440-8c0df6c7c5db"). InnerVolumeSpecName "kube-api-access-h9hjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.428826 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4e0f34-629d-499a-8440-8c0df6c7c5db-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:12 crc kubenswrapper[4724]: I0223 17:37:12.428881 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9hjn\" (UniqueName: \"kubernetes.io/projected/5b4e0f34-629d-499a-8440-8c0df6c7c5db-kube-api-access-h9hjn\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.017285 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds"] Feb 23 17:37:13 crc kubenswrapper[4724]: E0223 17:37:13.017576 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" containerName="route-controller-manager" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.017593 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" containerName="route-controller-manager" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.017729 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" containerName="route-controller-manager" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.018211 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.037431 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-client-ca\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.037593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572nd\" (UniqueName: \"kubernetes.io/projected/d57da179-ea78-4c34-8a0e-9b9910b4cf81-kube-api-access-572nd\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.037652 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-config\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.037747 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d57da179-ea78-4c34-8a0e-9b9910b4cf81-serving-cert\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.037881 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds"] Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.106369 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" event={"ID":"5b4e0f34-629d-499a-8440-8c0df6c7c5db","Type":"ContainerDied","Data":"aeceafbb3115a5b52b1c642cd8ad136e59f0ec16bd90b98a0e79885250986c67"} Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.106448 4724 scope.go:117] "RemoveContainer" containerID="f58c5fcb3465897ab18dc5510cc3a664a08a9b6f8613a75583a3bc086d19b987" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.106545 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.129602 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.133681 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6f7b46b7-vfpwb"] Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.138777 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d57da179-ea78-4c34-8a0e-9b9910b4cf81-serving-cert\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.138861 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-client-ca\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.138903 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572nd\" (UniqueName: \"kubernetes.io/projected/d57da179-ea78-4c34-8a0e-9b9910b4cf81-kube-api-access-572nd\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.138950 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-config\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.141291 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-config\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.141964 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d57da179-ea78-4c34-8a0e-9b9910b4cf81-client-ca\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.144379 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d57da179-ea78-4c34-8a0e-9b9910b4cf81-serving-cert\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc 
kubenswrapper[4724]: I0223 17:37:13.161075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572nd\" (UniqueName: \"kubernetes.io/projected/d57da179-ea78-4c34-8a0e-9b9910b4cf81-kube-api-access-572nd\") pod \"route-controller-manager-6c646b77ff-tt4ds\" (UID: \"d57da179-ea78-4c34-8a0e-9b9910b4cf81\") " pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.344280 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:13 crc kubenswrapper[4724]: I0223 17:37:13.821256 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds"] Feb 23 17:37:13 crc kubenswrapper[4724]: W0223 17:37:13.831031 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd57da179_ea78_4c34_8a0e_9b9910b4cf81.slice/crio-767bf76dfb41f3f1c949f1e851df2b045735d707fed1b516aed1a2a35c64dc81 WatchSource:0}: Error finding container 767bf76dfb41f3f1c949f1e851df2b045735d707fed1b516aed1a2a35c64dc81: Status 404 returned error can't find the container with id 767bf76dfb41f3f1c949f1e851df2b045735d707fed1b516aed1a2a35c64dc81 Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.115342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" event={"ID":"d57da179-ea78-4c34-8a0e-9b9910b4cf81","Type":"ContainerStarted","Data":"be12ee9c28c399001bd74a07a5ec29266c947529aa45b343f09f03fd9034e685"} Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.115693 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" event={"ID":"d57da179-ea78-4c34-8a0e-9b9910b4cf81","Type":"ContainerStarted","Data":"767bf76dfb41f3f1c949f1e851df2b045735d707fed1b516aed1a2a35c64dc81"} Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.115715 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.133696 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" podStartSLOduration=3.133672935 podStartE2EDuration="3.133672935s" podCreationTimestamp="2026-02-23 17:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:37:14.13309838 +0000 UTC m=+389.949298000" watchObservedRunningTime="2026-02-23 17:37:14.133672935 +0000 UTC m=+389.949872535" Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.492759 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c646b77ff-tt4ds" Feb 23 17:37:14 crc kubenswrapper[4724]: I0223 17:37:14.957648 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4e0f34-629d-499a-8440-8c0df6c7c5db" path="/var/lib/kubelet/pods/5b4e0f34-629d-499a-8440-8c0df6c7c5db/volumes" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.170689 4724 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" podUID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" containerName="registry" containerID="cri-o://e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc" gracePeriod=30 Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.639712 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765037 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2llc\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765195 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765277 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765524 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765692 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted\") pod \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\" (UID: \"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3\") " Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.765984 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.766013 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.766273 4724 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.766289 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.773698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc" (OuterVolumeSpecName: "kube-api-access-r2llc") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "kube-api-access-r2llc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.774039 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.781494 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.782546 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.783567 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.791553 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" (UID: "9d52ec05-b283-48f8-aed2-50c0a6dcc9e3"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.867832 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2llc\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-kube-api-access-r2llc\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.868296 4724 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.868319 4724 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.868336 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:26 crc kubenswrapper[4724]: I0223 17:37:26.868355 4724 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.206197 4724 generic.go:334] "Generic (PLEG): container finished" podID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" containerID="e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc" exitCode=0 Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.206285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" event={"ID":"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3","Type":"ContainerDied","Data":"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc"} Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.206325 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.206379 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qqsg7" event={"ID":"9d52ec05-b283-48f8-aed2-50c0a6dcc9e3","Type":"ContainerDied","Data":"6094ca6d4531c565a0f33aadc2dc17cb69ab55c666b6a51abec7b48e55731764"} Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.206472 4724 scope.go:117] "RemoveContainer" containerID="e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc" Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.248242 4724 scope.go:117] "RemoveContainer" containerID="e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc" Feb 23 17:37:27 crc kubenswrapper[4724]: E0223 17:37:27.249169 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc\": container with ID starting with e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc not found: ID does not exist" containerID="e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc" Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.249280 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc"} err="failed to get container status \"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc\": rpc error: code = NotFound desc = could not find container \"e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc\": container with ID starting with e502cc0d195ea248d7175d235ccec9cc0041327716c5a06092f4dccc680da6bc not found: ID does not exist" Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.251382 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:37:27 crc kubenswrapper[4724]: I0223 17:37:27.259345 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qqsg7"] Feb 23 17:37:28 crc kubenswrapper[4724]: I0223 17:37:28.967677 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" path="/var/lib/kubelet/pods/9d52ec05-b283-48f8-aed2-50c0a6dcc9e3/volumes" Feb 23 17:37:57 crc kubenswrapper[4724]: I0223 17:37:57.752178 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:37:57 crc kubenswrapper[4724]: I0223 17:37:57.753018 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:38:27 crc kubenswrapper[4724]: I0223 17:38:27.752035 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
23 17:38:27 crc kubenswrapper[4724]: I0223 17:38:27.752833 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:38:57 crc kubenswrapper[4724]: I0223 17:38:57.751951 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:38:57 crc kubenswrapper[4724]: I0223 17:38:57.752890 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:38:57 crc kubenswrapper[4724]: I0223 17:38:57.752961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:38:57 crc kubenswrapper[4724]: I0223 17:38:57.754043 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 17:38:57 crc kubenswrapper[4724]: I0223 17:38:57.754177 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5" gracePeriod=600 Feb 23 17:38:58 crc kubenswrapper[4724]: I0223 17:38:58.901235 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5" exitCode=0 Feb 23 17:38:58 crc kubenswrapper[4724]: I0223 17:38:58.901515 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5"} Feb 23 17:38:58 crc kubenswrapper[4724]: I0223 17:38:58.901717 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e"} Feb 23 17:38:58 crc kubenswrapper[4724]: I0223 17:38:58.901763 4724 scope.go:117] "RemoveContainer" containerID="716e3c7a8293727fc9315beb9da7fea72ec299ea5d8f15035ab77347c201a7db" Feb 23 17:39:45 crc kubenswrapper[4724]: I0223 17:39:45.140200 4724 scope.go:117] "RemoveContainer" containerID="a57eb595fa93ecaedb32a080094709af0ecc7a1433b861be3244510d99225e53" Feb 23 17:39:45 crc kubenswrapper[4724]: I0223 17:39:45.168494 4724 scope.go:117] 
"RemoveContainer" containerID="35a4e5e1ed3010b4c084c7200b6b2bd0e4e9d13275a81c92b6cbdc70da6aadd7" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.570017 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pns6j"] Feb 23 17:41:00 crc kubenswrapper[4724]: E0223 17:41:00.571033 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" containerName="registry" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.571052 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" containerName="registry" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.571150 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d52ec05-b283-48f8-aed2-50c0a6dcc9e3" containerName="registry" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.571591 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.573525 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.578328 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2mghr" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.583435 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.586337 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pns6j"] Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.593305 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vlrjb"] Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.594203 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vlrjb" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.595953 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vljtc" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.600052 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zzpcv"] Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.600645 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.603066 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r6wrk" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.606065 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vlrjb"] Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.615736 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zzpcv"] Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.654276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjqb\" (UniqueName: \"kubernetes.io/projected/2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b-kube-api-access-kvjqb\") pod \"cert-manager-webhook-687f57d79b-zzpcv\" (UID: \"2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.654329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kmd\" (UniqueName: \"kubernetes.io/projected/209587a2-48da-480c-93b0-17a306f362a3-kube-api-access-h9kmd\") pod \"cert-manager-cainjector-cf98fcc89-pns6j\" (UID: \"209587a2-48da-480c-93b0-17a306f362a3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.654589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lnrh\" (UniqueName: \"kubernetes.io/projected/4a08b754-7169-4f53-9212-84ed962b15dd-kube-api-access-5lnrh\") pod \"cert-manager-858654f9db-vlrjb\" (UID: \"4a08b754-7169-4f53-9212-84ed962b15dd\") " pod="cert-manager/cert-manager-858654f9db-vlrjb" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.756227 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjqb\" (UniqueName: \"kubernetes.io/projected/2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b-kube-api-access-kvjqb\") pod \"cert-manager-webhook-687f57d79b-zzpcv\" (UID: \"2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.756287 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kmd\" (UniqueName: \"kubernetes.io/projected/209587a2-48da-480c-93b0-17a306f362a3-kube-api-access-h9kmd\") pod \"cert-manager-cainjector-cf98fcc89-pns6j\" (UID: \"209587a2-48da-480c-93b0-17a306f362a3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.756352 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnrh\" (UniqueName: \"kubernetes.io/projected/4a08b754-7169-4f53-9212-84ed962b15dd-kube-api-access-5lnrh\") pod \"cert-manager-858654f9db-vlrjb\" (UID: \"4a08b754-7169-4f53-9212-84ed962b15dd\") " pod="cert-manager/cert-manager-858654f9db-vlrjb" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.780647 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kmd\" (UniqueName: \"kubernetes.io/projected/209587a2-48da-480c-93b0-17a306f362a3-kube-api-access-h9kmd\") pod \"cert-manager-cainjector-cf98fcc89-pns6j\" (UID: \"209587a2-48da-480c-93b0-17a306f362a3\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.780717 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjqb\" (UniqueName: \"kubernetes.io/projected/2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b-kube-api-access-kvjqb\") pod \"cert-manager-webhook-687f57d79b-zzpcv\" (UID: \"2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.780849 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnrh\" (UniqueName: \"kubernetes.io/projected/4a08b754-7169-4f53-9212-84ed962b15dd-kube-api-access-5lnrh\") pod \"cert-manager-858654f9db-vlrjb\" (UID: \"4a08b754-7169-4f53-9212-84ed962b15dd\") " pod="cert-manager/cert-manager-858654f9db-vlrjb" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.890854 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.912953 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vlrjb" Feb 23 17:41:00 crc kubenswrapper[4724]: I0223 17:41:00.925807 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.361050 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zzpcv"] Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.372242 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pns6j"] Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.381310 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.694228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" event={"ID":"2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b","Type":"ContainerStarted","Data":"fbcd9e1e41cbce20835ca83124658903b7f2423afa3a79fa2f584d039aa8d1d4"} Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.695757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" event={"ID":"209587a2-48da-480c-93b0-17a306f362a3","Type":"ContainerStarted","Data":"62baf8a9f1bc4cbc5256bc6185f551ade30bd53d5061b1a02760637bc6d41409"} Feb 23 17:41:01 crc kubenswrapper[4724]: I0223 17:41:01.930511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vlrjb"] Feb 23 17:41:02 crc kubenswrapper[4724]: I0223 17:41:02.703734 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vlrjb" event={"ID":"4a08b754-7169-4f53-9212-84ed962b15dd","Type":"ContainerStarted","Data":"0d058ea460d9ba21985e542550c4c208a266bcdd9a5f8fbefddd35e8146d4482"} Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.723703 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" event={"ID":"2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b","Type":"ContainerStarted","Data":"20ef8bac693792419050da0c62539011d88ebf071a78a7b6fc2af006d1b5d872"} Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.724327 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.725259 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" event={"ID":"209587a2-48da-480c-93b0-17a306f362a3","Type":"ContainerStarted","Data":"f329d9bca6c65d5869c5f81ddb2584cc0b2f291d3c7e55f1b20cda56dbebee74"} Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.727645 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vlrjb" event={"ID":"4a08b754-7169-4f53-9212-84ed962b15dd","Type":"ContainerStarted","Data":"8472ce1de7a9fe26613a2b844d263be52ba71689891326b2c5f05d965dbdc86c"} Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.739010 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" podStartSLOduration=1.772364728 podStartE2EDuration="5.738987251s" podCreationTimestamp="2026-02-23 17:41:00 +0000 UTC" firstStartedPulling="2026-02-23 17:41:01.384978741 +0000 UTC m=+617.201178341" lastFinishedPulling="2026-02-23 17:41:05.351601264 +0000 UTC m=+621.167800864" observedRunningTime="2026-02-23 17:41:05.738687334 +0000 UTC m=+621.554886934" watchObservedRunningTime="2026-02-23 17:41:05.738987251 +0000 UTC m=+621.555186851" Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.772836 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vlrjb" podStartSLOduration=2.364734333 podStartE2EDuration="5.772806318s" podCreationTimestamp="2026-02-23 17:41:00 +0000 UTC" firstStartedPulling="2026-02-23 17:41:01.962352545 +0000 UTC m=+617.778552145" lastFinishedPulling="2026-02-23 17:41:05.3704245 +0000 UTC m=+621.186624130" observedRunningTime="2026-02-23 17:41:05.769336862 +0000 UTC m=+621.585536482" watchObservedRunningTime="2026-02-23 17:41:05.772806318 +0000 UTC m=+621.589005918" Feb 23 17:41:05 crc kubenswrapper[4724]: I0223 17:41:05.793037 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pns6j" podStartSLOduration=1.817949765 podStartE2EDuration="5.793012777s" podCreationTimestamp="2026-02-23 17:41:00 +0000 UTC" firstStartedPulling="2026-02-23 17:41:01.378915961 +0000 UTC m=+617.195115561" lastFinishedPulling="2026-02-23 17:41:05.353978943 +0000 UTC m=+621.170178573" observedRunningTime="2026-02-23 17:41:05.792638318 +0000 UTC m=+621.608837928" watchObservedRunningTime="2026-02-23 17:41:05.793012777 +0000 UTC m=+621.609212377" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.521336 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78fmj"] Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526042 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-controller" containerID="cri-o://6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0" gracePeriod=30 Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526143 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="nbdb" containerID="cri-o://fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7" gracePeriod=30 Feb 23 17:41:10 crc 
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526184 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526233 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-acl-logging" containerID="cri-o://0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526291 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-node" containerID="cri-o://c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526293 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="northd" containerID="cri-o://16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.526223 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="sbdb" containerID="cri-o://0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.564211 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" containerID="cri-o://88158ddc63919d018224228d921f0c979df519c84676428b368e05e5728e7216" gracePeriod=30
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.770256 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovnkube-controller/2.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.773538 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-acl-logging/0.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774219 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-controller/0.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774771 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="88158ddc63919d018224228d921f0c979df519c84676428b368e05e5728e7216" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774797 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774805 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774813 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774822 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774830 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1" exitCode=0
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774838 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e" exitCode=143
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774846 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerID="6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0" exitCode=143
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774872 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"88158ddc63919d018224228d921f0c979df519c84676428b368e05e5728e7216"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774936 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.774959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775024 4724 scope.go:117] "RemoveContainer" containerID="087c3ef86529801a82bd5e2a43ee86c6b7c0edee49efeae580674a4f19e47d26"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775018 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775060 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775087 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.775102 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.780027 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/1.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.780755 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/0.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.780791 4724 generic.go:334] "Generic (PLEG): container finished" podID="45a042db-4057-4913-8091-da7d8c79feba" containerID="226aa2be31b966ee054e9088dea89c730f96f6f6438d8c45123ad5997ba318a1" exitCode=2
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.780820 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerDied","Data":"226aa2be31b966ee054e9088dea89c730f96f6f6438d8c45123ad5997ba318a1"}
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.781522 4724 scope.go:117] "RemoveContainer" containerID="226aa2be31b966ee054e9088dea89c730f96f6f6438d8c45123ad5997ba318a1"
Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.781820 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mmxrg_openshift-multus(45a042db-4057-4913-8091-da7d8c79feba)\"" pod="openshift-multus/multus-mmxrg" podUID="45a042db-4057-4913-8091-da7d8c79feba"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.836576 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-acl-logging/0.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.837820 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-controller/0.log"
Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.838686 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.839970 4724 scope.go:117] "RemoveContainer" containerID="1b6df06ac80efb6de6b872bec086095e2b4ba0c13dba6e949a4f7309ef3e4300" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894499 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x6bj7"] Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894803 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kubecfg-setup" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894827 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kubecfg-setup" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894843 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="northd" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894854 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="northd" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894863 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="sbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894871 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="sbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894883 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894892 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894904 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894914 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894929 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894938 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894950 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-acl-logging" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894958 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-acl-logging" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894973 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-node" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894982 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" 
containerName="kube-rbac-proxy-node" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.894990 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.894999 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.895012 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="nbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895020 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="nbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.895031 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895040 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895200 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="sbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895219 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-node" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895231 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895243 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895252 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895263 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="northd" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895275 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895285 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-acl-logging" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895294 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895304 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="nbdb" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895313 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovn-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: E0223 17:41:10.895448 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.895461 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" containerName="ovnkube-controller" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.897841 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.927749 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928087 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928199 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpsrk\" (UniqueName: \"kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928303 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928456 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.927904 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928162 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928535 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928599 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928724 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928751 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928775 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928802 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash" (OuterVolumeSpecName: "host-slash") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928815 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928836 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928858 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928856 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928882 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket" (OuterVolumeSpecName: "log-socket") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928906 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928945 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928970 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.928974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929024 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929044 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929057 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929069 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log" (OuterVolumeSpecName: "node-log") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929077 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929112 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929127 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929167 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929175 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib\") pod \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\" (UID: \"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1\") " Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929335 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-netns\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-log-socket\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929399 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-script-lib\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929442 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-ovn\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929479 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-config\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-netd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929594 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-systemd\") pod 
\"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929627 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929636 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-env-overrides\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929706 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929742 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-systemd-units\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929759 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-etc-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929850 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-node-log\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929866 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-slash\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.929974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-bin\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovn-node-metrics-cert\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930077 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-kubelet\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930116 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-var-lib-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930179 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrwd\" (UniqueName: 
\"kubernetes.io/projected/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-kube-api-access-tmrwd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930237 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930311 4724 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930322 4724 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930331 4724 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930339 4724 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930350 4724 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930359 4724 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930368 4724 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930377 4724 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930386 4724 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930408 4724 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930417 4724 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930424 4724 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930432 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930441 4724 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930451 4724 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930459 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.930742 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-zzpcv" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.931655 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.936776 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk" (OuterVolumeSpecName: "kube-api-access-vpsrk") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "kube-api-access-vpsrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.937972 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:41:10 crc kubenswrapper[4724]: I0223 17:41:10.951497 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" (UID: "8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.031418 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.031514 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-netns\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.031550 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-log-socket\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.031666 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-netns\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.031748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-log-socket\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-script-lib\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032158 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-ovn\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032213 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-config\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032291 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-systemd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032342 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-netd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032375 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033526 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-systemd-units\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033557 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-env-overrides\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033561 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-config\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-etc-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032599 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-systemd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032466 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-ovn\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033698 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-systemd-units\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.032558 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-netd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033729 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-node-log\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033770 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-node-log\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033809 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-etc-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033804 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-slash\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033841 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-slash\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033853 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovnkube-script-lib\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033132 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-run-ovn-kubernetes\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033925 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-bin\") pod \"ovnkube-node-x6bj7\" (UID: 
\"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033951 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovn-node-metrics-cert\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.033982 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-kubelet\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034008 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-var-lib-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034056 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrwd\" (UniqueName: \"kubernetes.io/projected/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-kube-api-access-tmrwd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034330 4724 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034380 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-kubelet\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034437 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-run-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034482 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-host-cni-bin\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034523 4724 reconciler_common.go:293] 
"Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034549 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpsrk\" (UniqueName: \"kubernetes.io/projected/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-kube-api-access-vpsrk\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034567 4724 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-var-lib-openvswitch\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.034909 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-env-overrides\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.038009 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-ovn-node-metrics-cert\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.056012 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrwd\" (UniqueName: \"kubernetes.io/projected/8c73dc1e-320c-48ac-8c43-7e0b93f7ba41-kube-api-access-tmrwd\") pod \"ovnkube-node-x6bj7\" (UID: \"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41\") " pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.213573 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.792651 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-acl-logging/0.log" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.793584 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78fmj_8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/ovn-controller/0.log" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.794349 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" event={"ID":"8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1","Type":"ContainerDied","Data":"4d3e62c813d4b4e51956aba87980d5e1132e8213ba780042a99f3f6149163ef8"} Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.794440 4724 scope.go:117] "RemoveContainer" containerID="88158ddc63919d018224228d921f0c979df519c84676428b368e05e5728e7216" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.794370 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78fmj" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.798139 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/1.log" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.800608 4724 generic.go:334] "Generic (PLEG): container finished" podID="8c73dc1e-320c-48ac-8c43-7e0b93f7ba41" containerID="3808f0a8d418f1a6f8552fde475df2eec9988b6dd5ad4882dbdd3be3f0f599ad" exitCode=0 Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.800656 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerDied","Data":"3808f0a8d418f1a6f8552fde475df2eec9988b6dd5ad4882dbdd3be3f0f599ad"} Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.800678 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"95cf82c6f10b4acb7b4c8561a9dd8b70438a576b6258a937976ab3d2c9d163f9"} Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.841056 4724 scope.go:117] "RemoveContainer" containerID="0c79e6fcccea909f19d459103ec63aed7de2f0a498ffab190e63a647f2f20bdb" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.867459 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78fmj"] Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.872881 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78fmj"] Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.886918 4724 scope.go:117] "RemoveContainer" containerID="fcecf13d69c640f2efa0b9abefbb13a08c15b82126e11159faed92da3b2addf7" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.916447 4724 scope.go:117] "RemoveContainer" containerID="16f5feb6e9a7c7f7013f55ac108791b423b2ba7e1e443d1653c7a85e5622cfc1" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.942097 4724 scope.go:117] "RemoveContainer" containerID="9724a1e6d050110c119972cc34cf64fa8dc6c515f8bc0aec4dbf82463c18d3f9" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.960004 4724 scope.go:117] "RemoveContainer" containerID="c4cc6f0c20370fbabd53ec282c0d89022adfd7706f939163b00f543f7086aad1" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.975839 4724 scope.go:117] "RemoveContainer" containerID="0cd0bf4870b6f1d6369d4d1a12817188568f572708708271de41e71e46f2002e" Feb 23 17:41:11 crc kubenswrapper[4724]: I0223 17:41:11.994571 4724 scope.go:117] "RemoveContainer" containerID="6574641e63ac4e0e79550d6b28b2a683f11be521784b19d2fbfc9226dd5b7ef0" Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.010648 4724 scope.go:117] "RemoveContainer" containerID="31a45d3dd297c3d2d4ca5e437f703790082ce9d5429688cc058e989da24daf02" Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.809738 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"97b3c071d1f45ddd45df7dcecd7abe1e739fa61c14aa3261376d6cfbb759262a"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.810360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" 
event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"61bb663ea3b9114606762c96dded65d3382801f08fd3261de6690978a44881bc"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.810469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"9f663cdc7a3d2b174674e6274491f1edd04e100e709512c438cd74eceb5f485e"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.810488 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"f77640d5d1298d0eeb1603a87eb5a2525dc5bede3f2e88616f146e0d89ed8536"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.810503 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"c981a58e752a578177b51ceff0294dc68f76d73ef37b5577af3bc907984513e8"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.810517 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"93d8787c9989035fc33c57b4322466c624b4b4a833cf2b18a35be497f145a49b"} Feb 23 17:41:12 crc kubenswrapper[4724]: I0223 17:41:12.958923 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1" path="/var/lib/kubelet/pods/8c8df7b6-e5f2-4950-b2d2-9f1583fe76c1/volumes" Feb 23 17:41:15 crc kubenswrapper[4724]: I0223 17:41:15.831699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"c5aaf8146f03ffdf5f47203ec8a08ed022a2bbc2022af5f1689322045ed1b8de"} Feb 23 17:41:17 crc kubenswrapper[4724]: I0223 17:41:17.849964 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" event={"ID":"8c73dc1e-320c-48ac-8c43-7e0b93f7ba41","Type":"ContainerStarted","Data":"5e444a1ad4a39a079e7ce5146ba9e97acb97a3c9a820638ae6d354afe6ab2cd0"} Feb 23 17:41:17 crc kubenswrapper[4724]: I0223 17:41:17.850452 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:17 crc kubenswrapper[4724]: I0223 17:41:17.850517 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:17 crc kubenswrapper[4724]: I0223 17:41:17.881358 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:17 crc kubenswrapper[4724]: I0223 17:41:17.891884 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" podStartSLOduration=7.89185393 podStartE2EDuration="7.89185393s" podCreationTimestamp="2026-02-23 17:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:41:17.88917659 +0000 UTC m=+633.705376190" watchObservedRunningTime="2026-02-23 17:41:17.89185393 +0000 UTC m=+633.708053520" Feb 23 17:41:18 crc kubenswrapper[4724]: I0223 17:41:18.857807 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:18 crc kubenswrapper[4724]: I0223 17:41:18.901468 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:24 crc kubenswrapper[4724]: I0223 17:41:24.958835 4724 scope.go:117] "RemoveContainer" containerID="226aa2be31b966ee054e9088dea89c730f96f6f6438d8c45123ad5997ba318a1" Feb 23 17:41:25 crc kubenswrapper[4724]: I0223 17:41:25.913731 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mmxrg_45a042db-4057-4913-8091-da7d8c79feba/kube-multus/1.log" Feb 23 17:41:25 crc kubenswrapper[4724]: I0223 17:41:25.914197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mmxrg" event={"ID":"45a042db-4057-4913-8091-da7d8c79feba","Type":"ContainerStarted","Data":"07a368d08a00a875ab922411b119b98b5ce5a339be62fec57572f8e53f67cd31"} Feb 23 17:41:27 crc kubenswrapper[4724]: I0223 17:41:27.753273 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:41:27 crc kubenswrapper[4724]: I0223 17:41:27.754983 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.815669 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj"] Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.817535 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.820085 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.834293 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj"] Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.965696 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshkq\" (UniqueName: \"kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.965772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:39 crc kubenswrapper[4724]: I0223 17:41:39.965812 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.068119 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kshkq\" (UniqueName: \"kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.068214 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.068292 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.069208 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.069317 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.102457 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kshkq\" (UniqueName: \"kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.157573 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:40 crc kubenswrapper[4724]: I0223 17:41:40.414451 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj"] Feb 23 17:41:41 crc kubenswrapper[4724]: I0223 17:41:41.025505 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerID="a0f09d53d790a9f9070552922346449c45c5438b1314903aaa769873165ad0ce" exitCode=0 Feb 23 17:41:41 crc kubenswrapper[4724]: I0223 17:41:41.025578 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" event={"ID":"ca8d1d6c-2638-493f-8aed-775dd9bd326d","Type":"ContainerDied","Data":"a0f09d53d790a9f9070552922346449c45c5438b1314903aaa769873165ad0ce"} Feb 23 17:41:41 crc kubenswrapper[4724]: I0223 17:41:41.025622 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" event={"ID":"ca8d1d6c-2638-493f-8aed-775dd9bd326d","Type":"ContainerStarted","Data":"1436897ebc2681c8ef79b2d1ed0366dd05933be3de6aed7e03a63bbcb9a4eeb9"} Feb 23 17:41:41 crc kubenswrapper[4724]: I0223 17:41:41.253877 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x6bj7" Feb 23 17:41:43 crc kubenswrapper[4724]: I0223 17:41:43.039816 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerID="5a903e00498c69b6785376c9f77f2dfc75068fbf925b81123adb13237ab15348" exitCode=0 Feb 23 17:41:43 crc kubenswrapper[4724]: I0223 17:41:43.039923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" event={"ID":"ca8d1d6c-2638-493f-8aed-775dd9bd326d","Type":"ContainerDied","Data":"5a903e00498c69b6785376c9f77f2dfc75068fbf925b81123adb13237ab15348"} Feb 23 17:41:43 crc kubenswrapper[4724]: E0223 17:41:43.625831 4724 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8d1d6c_2638_493f_8aed_775dd9bd326d.slice/crio-conmon-04e5ea5c605d1415e181c8b778a75415d6fefc5dbba4d154dc77f5654226e269.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:41:44 crc kubenswrapper[4724]: I0223 17:41:44.067826 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerID="04e5ea5c605d1415e181c8b778a75415d6fefc5dbba4d154dc77f5654226e269" exitCode=0 Feb 23 17:41:44 crc kubenswrapper[4724]: I0223 17:41:44.067903 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" event={"ID":"ca8d1d6c-2638-493f-8aed-775dd9bd326d","Type":"ContainerDied","Data":"04e5ea5c605d1415e181c8b778a75415d6fefc5dbba4d154dc77f5654226e269"} Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.300628 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.457014 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util\") pod \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.457091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle\") pod \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.457126 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kshkq\" (UniqueName: \"kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq\") pod \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\" (UID: \"ca8d1d6c-2638-493f-8aed-775dd9bd326d\") " Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.459240 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle" (OuterVolumeSpecName: "bundle") pod "ca8d1d6c-2638-493f-8aed-775dd9bd326d" (UID: "ca8d1d6c-2638-493f-8aed-775dd9bd326d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.465732 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq" (OuterVolumeSpecName: "kube-api-access-kshkq") pod "ca8d1d6c-2638-493f-8aed-775dd9bd326d" (UID: "ca8d1d6c-2638-493f-8aed-775dd9bd326d"). InnerVolumeSpecName "kube-api-access-kshkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.479589 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util" (OuterVolumeSpecName: "util") pod "ca8d1d6c-2638-493f-8aed-775dd9bd326d" (UID: "ca8d1d6c-2638-493f-8aed-775dd9bd326d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.558349 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-util\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.558444 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca8d1d6c-2638-493f-8aed-775dd9bd326d-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:45 crc kubenswrapper[4724]: I0223 17:41:45.558454 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kshkq\" (UniqueName: \"kubernetes.io/projected/ca8d1d6c-2638-493f-8aed-775dd9bd326d-kube-api-access-kshkq\") on node \"crc\" DevicePath \"\"" Feb 23 17:41:46 crc kubenswrapper[4724]: I0223 17:41:46.086428 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" Feb 23 17:41:46 crc kubenswrapper[4724]: I0223 17:41:46.086360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj" event={"ID":"ca8d1d6c-2638-493f-8aed-775dd9bd326d","Type":"ContainerDied","Data":"1436897ebc2681c8ef79b2d1ed0366dd05933be3de6aed7e03a63bbcb9a4eeb9"} Feb 23 17:41:46 crc kubenswrapper[4724]: I0223 17:41:46.087028 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1436897ebc2681c8ef79b2d1ed0366dd05933be3de6aed7e03a63bbcb9a4eeb9" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.288204 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl"] Feb 23 17:41:54 crc kubenswrapper[4724]: E0223 17:41:54.288973 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="util" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.288991 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="util" Feb 23 17:41:54 crc kubenswrapper[4724]: E0223 17:41:54.289002 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="extract" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.289009 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="extract" Feb 23 17:41:54 crc kubenswrapper[4724]: E0223 17:41:54.289018 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="pull" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.289025 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="pull" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.289145 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8d1d6c-2638-493f-8aed-775dd9bd326d" containerName="extract" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.289686 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.293793 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.293994 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.293930 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-gdgw7" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.304995 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.343173 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.343921 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.347751 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.347978 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-ckdd4" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.371344 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.372188 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.378110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.378454 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.378578 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.378700 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.379312 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.396704 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.479860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.479964 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.480019 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.480082 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmjq\" (UniqueName: \"kubernetes.io/projected/814ddfc1-f41d-41fe-9e19-72ebf86f8950-kube-api-access-htmjq\") pod \"obo-prometheus-operator-68bc856cb9-5jjjl\" (UID: \"814ddfc1-f41d-41fe-9e19-72ebf86f8950\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.480126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.488182 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.495032 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.496853 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88e5fd13-0f53-4516-b0e8-73f22b9837eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml\" (UID: \"88e5fd13-0f53-4516-b0e8-73f22b9837eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.518177 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7750cf0f-feab-4fd7-a8a3-4fc9298a169e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd\" (UID: \"7750cf0f-feab-4fd7-a8a3-4fc9298a169e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.577312 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-djp7f"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.578256 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.581438 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.581602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmjq\" (UniqueName: \"kubernetes.io/projected/814ddfc1-f41d-41fe-9e19-72ebf86f8950-kube-api-access-htmjq\") pod \"obo-prometheus-operator-68bc856cb9-5jjjl\" (UID: \"814ddfc1-f41d-41fe-9e19-72ebf86f8950\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.585869 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-g4lqs" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.600139 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-djp7f"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.604200 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmjq\" (UniqueName: \"kubernetes.io/projected/814ddfc1-f41d-41fe-9e19-72ebf86f8950-kube-api-access-htmjq\") pod \"obo-prometheus-operator-68bc856cb9-5jjjl\" (UID: \"814ddfc1-f41d-41fe-9e19-72ebf86f8950\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.625739 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.660723 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.684983 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-observability-operator-tls\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.685039 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrgc\" (UniqueName: \"kubernetes.io/projected/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-kube-api-access-fzrgc\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.688265 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.733194 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6j5cq"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.734219 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.737096 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h2l2v" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.745191 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6j5cq"] Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.785871 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn52\" (UniqueName: \"kubernetes.io/projected/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-kube-api-access-6vn52\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.785924 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.785952 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-observability-operator-tls\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.786116 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrgc\" (UniqueName: \"kubernetes.io/projected/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-kube-api-access-fzrgc\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.790687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-observability-operator-tls\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.806796 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrgc\" (UniqueName: \"kubernetes.io/projected/0a3d2d9a-1225-4ec1-ac5b-4657ca676522-kube-api-access-fzrgc\") pod \"observability-operator-59bdc8b94-djp7f\" (UID: \"0a3d2d9a-1225-4ec1-ac5b-4657ca676522\") " pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.888192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn52\" (UniqueName: \"kubernetes.io/projected/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-kube-api-access-6vn52\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.888243 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.889182 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.895718 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:41:54 crc kubenswrapper[4724]: I0223 17:41:54.908145 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vn52\" (UniqueName: \"kubernetes.io/projected/606f1fc9-e753-4c28-8386-dfe7bb1f4eca-kube-api-access-6vn52\") pod \"perses-operator-5bf474d74f-6j5cq\" (UID: \"606f1fc9-e753-4c28-8386-dfe7bb1f4eca\") " pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.056581 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.134413 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-djp7f"] Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.176364 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl"] Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.267620 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml"] Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.285737 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd"] Feb 23 17:41:55 crc kubenswrapper[4724]: W0223 17:41:55.297543 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7750cf0f_feab_4fd7_a8a3_4fc9298a169e.slice/crio-450cc60b7631a6374b4c1e299acf792a5884d5bfcc982c95412ee4aca33954ee WatchSource:0}: Error finding container 450cc60b7631a6374b4c1e299acf792a5884d5bfcc982c95412ee4aca33954ee: Status 404 returned error can't find the container with id 450cc60b7631a6374b4c1e299acf792a5884d5bfcc982c95412ee4aca33954ee Feb 23 17:41:55 crc kubenswrapper[4724]: I0223 17:41:55.333509 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6j5cq"] Feb 23 17:41:55 crc kubenswrapper[4724]: W0223 17:41:55.342964 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606f1fc9_e753_4c28_8386_dfe7bb1f4eca.slice/crio-4aee9da4c52e526fd730ae3cc585ccd0138e1ec04b4a8cdb1c93a1cf94d6cbad WatchSource:0}: Error finding container 4aee9da4c52e526fd730ae3cc585ccd0138e1ec04b4a8cdb1c93a1cf94d6cbad: Status 404 returned error can't find the container with id 
4aee9da4c52e526fd730ae3cc585ccd0138e1ec04b4a8cdb1c93a1cf94d6cbad Feb 23 17:41:56 crc kubenswrapper[4724]: I0223 17:41:56.141336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" event={"ID":"606f1fc9-e753-4c28-8386-dfe7bb1f4eca","Type":"ContainerStarted","Data":"4aee9da4c52e526fd730ae3cc585ccd0138e1ec04b4a8cdb1c93a1cf94d6cbad"} Feb 23 17:41:56 crc kubenswrapper[4724]: I0223 17:41:56.142899 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" event={"ID":"88e5fd13-0f53-4516-b0e8-73f22b9837eb","Type":"ContainerStarted","Data":"33ca429e537dfecef5ba686a7ed45a65c768ef9469e8bb33601dd914aea3a3d9"} Feb 23 17:41:56 crc kubenswrapper[4724]: I0223 17:41:56.144624 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" event={"ID":"7750cf0f-feab-4fd7-a8a3-4fc9298a169e","Type":"ContainerStarted","Data":"450cc60b7631a6374b4c1e299acf792a5884d5bfcc982c95412ee4aca33954ee"} Feb 23 17:41:56 crc kubenswrapper[4724]: I0223 17:41:56.145921 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" event={"ID":"0a3d2d9a-1225-4ec1-ac5b-4657ca676522","Type":"ContainerStarted","Data":"1e07041e33d39d1d75fe8ea5aea713249b378c6d9de35703dd5213328ac4f2df"} Feb 23 17:41:56 crc kubenswrapper[4724]: I0223 17:41:56.147292 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" event={"ID":"814ddfc1-f41d-41fe-9e19-72ebf86f8950","Type":"ContainerStarted","Data":"da9eba417134a9cf377cf8e01b1dc4c66789849285882c0099a7c105785fc65a"} Feb 23 17:41:57 crc kubenswrapper[4724]: I0223 17:41:57.752480 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:41:57 crc kubenswrapper[4724]: I0223 17:41:57.752546 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.266068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" event={"ID":"7750cf0f-feab-4fd7-a8a3-4fc9298a169e","Type":"ContainerStarted","Data":"9c168b2969ae4f9b96a1a8a40da706331ea020a9cebd8d74721c249b9cc3e5de"} Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.270602 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" event={"ID":"0a3d2d9a-1225-4ec1-ac5b-4657ca676522","Type":"ContainerStarted","Data":"4faa786c1476d0369320663e202e1b4ffa8bb426925e468e6490f5b6561d3cc6"} Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.270640 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.271466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" event={"ID":"814ddfc1-f41d-41fe-9e19-72ebf86f8950","Type":"ContainerStarted","Data":"1b3151e70865b01c63587fc41156a1f5c32e52d599c54dbf59517f3b88e185ea"} Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.273462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" event={"ID":"606f1fc9-e753-4c28-8386-dfe7bb1f4eca","Type":"ContainerStarted","Data":"95659e1c2845d4175c44a29783b19ce38d379e1759af79a75d3fb2559b3b9469"} Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.273934 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.275941 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" event={"ID":"88e5fd13-0f53-4516-b0e8-73f22b9837eb","Type":"ContainerStarted","Data":"09ffb89387fe415ae5283c9ee6c235d2239e2a4e763dad1939da580e1891be14"} Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.276658 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.294004 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd" podStartSLOduration=1.859005676 podStartE2EDuration="12.293971777s" podCreationTimestamp="2026-02-23 17:41:54 +0000 UTC" firstStartedPulling="2026-02-23 17:41:55.306260907 +0000 UTC m=+671.122460507" lastFinishedPulling="2026-02-23 17:42:05.741227008 +0000 UTC m=+681.557426608" observedRunningTime="2026-02-23 17:42:06.288244897 +0000 UTC m=+682.104444497" watchObservedRunningTime="2026-02-23 17:42:06.293971777 +0000 UTC m=+682.110171377" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.310858 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" podStartSLOduration=1.9597897789999998 podStartE2EDuration="12.310837139s" podCreationTimestamp="2026-02-23 17:41:54 +0000 UTC" firstStartedPulling="2026-02-23 17:41:55.349958073 +0000 UTC m=+671.166157673" lastFinishedPulling="2026-02-23 17:42:05.701005433 +0000 UTC m=+681.517205033" observedRunningTime="2026-02-23 17:42:06.310008608 +0000 UTC m=+682.126208218" watchObservedRunningTime="2026-02-23 17:42:06.310837139 +0000 UTC m=+682.127036739" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.339277 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml" podStartSLOduration=1.923433466 podStartE2EDuration="12.339253225s" podCreationTimestamp="2026-02-23 17:41:54 +0000 UTC" firstStartedPulling="2026-02-23 17:41:55.284839115 +0000 UTC m=+671.101038715" lastFinishedPulling="2026-02-23 17:42:05.700658874 +0000 UTC m=+681.516858474" observedRunningTime="2026-02-23 17:42:06.336721788 +0000 UTC m=+682.152921398" watchObservedRunningTime="2026-02-23 17:42:06.339253225 +0000 UTC m=+682.155452815" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.366031 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5jjjl" podStartSLOduration=1.861558852 podStartE2EDuration="12.366005646s" 
podCreationTimestamp="2026-02-23 17:41:54 +0000 UTC" firstStartedPulling="2026-02-23 17:41:55.195323707 +0000 UTC m=+671.011523297" lastFinishedPulling="2026-02-23 17:42:05.699770491 +0000 UTC m=+681.515970091" observedRunningTime="2026-02-23 17:42:06.364885077 +0000 UTC m=+682.181084687" watchObservedRunningTime="2026-02-23 17:42:06.366005646 +0000 UTC m=+682.182205246" Feb 23 17:42:06 crc kubenswrapper[4724]: I0223 17:42:06.408962 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-djp7f" podStartSLOduration=1.8306566229999999 podStartE2EDuration="12.408933903s" podCreationTimestamp="2026-02-23 17:41:54 +0000 UTC" firstStartedPulling="2026-02-23 17:41:55.158005588 +0000 UTC m=+670.974205188" lastFinishedPulling="2026-02-23 17:42:05.736282868 +0000 UTC m=+681.552482468" observedRunningTime="2026-02-23 17:42:06.401648341 +0000 UTC m=+682.217847941" watchObservedRunningTime="2026-02-23 17:42:06.408933903 +0000 UTC m=+682.225133513" Feb 23 17:42:15 crc kubenswrapper[4724]: I0223 17:42:15.061218 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6j5cq" Feb 23 17:42:27 crc kubenswrapper[4724]: I0223 17:42:27.752800 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:42:27 crc kubenswrapper[4724]: I0223 17:42:27.753650 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:42:27 crc kubenswrapper[4724]: I0223 17:42:27.753713 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:42:27 crc kubenswrapper[4724]: I0223 17:42:27.754422 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 17:42:27 crc kubenswrapper[4724]: I0223 17:42:27.754482 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e" gracePeriod=600 Feb 23 17:42:28 crc kubenswrapper[4724]: I0223 17:42:28.407250 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e" exitCode=0 Feb 23 17:42:28 crc kubenswrapper[4724]: I0223 17:42:28.407318 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e"} Feb 23 17:42:28 crc kubenswrapper[4724]: I0223 17:42:28.408115 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f"} Feb 23 17:42:28 crc kubenswrapper[4724]: I0223 17:42:28.408140 4724 scope.go:117] "RemoveContainer" containerID="bf38d9a5a1d2630175dcd94c9e597b013cf2712dd646e5ede28f7464d6d184a5" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.417287 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t"] Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.419130 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.421692 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.432255 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t"] Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.565404 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdft\" (UniqueName: \"kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.565498 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.565531 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.666608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.666664 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.666706 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdft\" (UniqueName: \"kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.667286 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.667366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.693523 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdft\" (UniqueName: \"kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:33 crc kubenswrapper[4724]: I0223 17:42:33.737496 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:34 crc kubenswrapper[4724]: I0223 17:42:34.140519 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t"] Feb 23 17:42:34 crc kubenswrapper[4724]: I0223 17:42:34.452315 4724 generic.go:334] "Generic (PLEG): container finished" podID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerID="95f1ea859f06c4d127174199c66223e012c9113881ff2e67e6304bfde2eb0187" exitCode=0 Feb 23 17:42:34 crc kubenswrapper[4724]: I0223 17:42:34.452492 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" event={"ID":"af1905ab-fece-4f3b-8f30-f96b5022bb3d","Type":"ContainerDied","Data":"95f1ea859f06c4d127174199c66223e012c9113881ff2e67e6304bfde2eb0187"} Feb 23 17:42:34 crc kubenswrapper[4724]: I0223 17:42:34.452867 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" event={"ID":"af1905ab-fece-4f3b-8f30-f96b5022bb3d","Type":"ContainerStarted","Data":"ca5003b1d0f55e674059267396ecac3d41d9643fd106659287eee4a8cbe52e94"} Feb 23 17:42:36 crc kubenswrapper[4724]: I0223 17:42:36.468189 4724 generic.go:334] "Generic (PLEG): container finished" podID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerID="7f06e185c3782b784907f77af8db95489034d6e29be7506246a47dcadb762a16" exitCode=0 Feb 23 17:42:36 crc kubenswrapper[4724]: I0223 17:42:36.468744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" event={"ID":"af1905ab-fece-4f3b-8f30-f96b5022bb3d","Type":"ContainerDied","Data":"7f06e185c3782b784907f77af8db95489034d6e29be7506246a47dcadb762a16"} Feb 23 17:42:37 crc kubenswrapper[4724]: I0223 17:42:37.478649 4724 generic.go:334] "Generic (PLEG): container finished" podID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerID="0fb77c19eb83245d9b6bd0e34c620f7c9a3a65f59ee5a7a1d3b94aad1da55c4a" exitCode=0 Feb 23 17:42:37 crc kubenswrapper[4724]: I0223 17:42:37.478791 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" event={"ID":"af1905ab-fece-4f3b-8f30-f96b5022bb3d","Type":"ContainerDied","Data":"0fb77c19eb83245d9b6bd0e34c620f7c9a3a65f59ee5a7a1d3b94aad1da55c4a"} Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.785127 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.947577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle\") pod \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.947638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcdft\" (UniqueName: \"kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft\") pod \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.947696 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util\") pod \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\" (UID: \"af1905ab-fece-4f3b-8f30-f96b5022bb3d\") " Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.948225 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle" (OuterVolumeSpecName: "bundle") pod "af1905ab-fece-4f3b-8f30-f96b5022bb3d" (UID: "af1905ab-fece-4f3b-8f30-f96b5022bb3d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.970299 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft" (OuterVolumeSpecName: "kube-api-access-rcdft") pod "af1905ab-fece-4f3b-8f30-f96b5022bb3d" (UID: "af1905ab-fece-4f3b-8f30-f96b5022bb3d"). InnerVolumeSpecName "kube-api-access-rcdft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:42:38 crc kubenswrapper[4724]: I0223 17:42:38.972278 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util" (OuterVolumeSpecName: "util") pod "af1905ab-fece-4f3b-8f30-f96b5022bb3d" (UID: "af1905ab-fece-4f3b-8f30-f96b5022bb3d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.048876 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.048911 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcdft\" (UniqueName: \"kubernetes.io/projected/af1905ab-fece-4f3b-8f30-f96b5022bb3d-kube-api-access-rcdft\") on node \"crc\" DevicePath \"\"" Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.048921 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af1905ab-fece-4f3b-8f30-f96b5022bb3d-util\") on node \"crc\" DevicePath \"\"" Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.496746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" event={"ID":"af1905ab-fece-4f3b-8f30-f96b5022bb3d","Type":"ContainerDied","Data":"ca5003b1d0f55e674059267396ecac3d41d9643fd106659287eee4a8cbe52e94"} Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.497231 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5003b1d0f55e674059267396ecac3d41d9643fd106659287eee4a8cbe52e94" Feb 23 17:42:39 crc kubenswrapper[4724]: I0223 17:42:39.496939 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.989964 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hs7ws"] Feb 23 17:42:44 crc kubenswrapper[4724]: E0223 17:42:44.990945 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="util" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.990961 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="util" Feb 23 17:42:44 crc kubenswrapper[4724]: E0223 17:42:44.990983 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="pull" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.990989 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="pull" Feb 23 17:42:44 crc kubenswrapper[4724]: E0223 17:42:44.991009 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="extract" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.991016 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="extract" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.991165 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1905ab-fece-4f3b-8f30-f96b5022bb3d" containerName="extract" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.991813 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" Feb 23 17:42:44 crc kubenswrapper[4724]: I0223 17:42:44.994566 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8zgwq" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.000506 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.001761 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.008564 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hs7ws"] Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.140980 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8nxc\" (UniqueName: \"kubernetes.io/projected/12ceca0c-78de-41ff-8e20-cdf172bd915e-kube-api-access-n8nxc\") pod \"nmstate-operator-694c9596b7-hs7ws\" (UID: \"12ceca0c-78de-41ff-8e20-cdf172bd915e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.242667 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8nxc\" (UniqueName: \"kubernetes.io/projected/12ceca0c-78de-41ff-8e20-cdf172bd915e-kube-api-access-n8nxc\") pod \"nmstate-operator-694c9596b7-hs7ws\" (UID: \"12ceca0c-78de-41ff-8e20-cdf172bd915e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.264638 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8nxc\" (UniqueName: \"kubernetes.io/projected/12ceca0c-78de-41ff-8e20-cdf172bd915e-kube-api-access-n8nxc\") pod \"nmstate-operator-694c9596b7-hs7ws\" (UID: \"12ceca0c-78de-41ff-8e20-cdf172bd915e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.314239 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" Feb 23 17:42:45 crc kubenswrapper[4724]: I0223 17:42:45.600900 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hs7ws"] Feb 23 17:42:46 crc kubenswrapper[4724]: I0223 17:42:46.549817 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" event={"ID":"12ceca0c-78de-41ff-8e20-cdf172bd915e","Type":"ContainerStarted","Data":"acea18f126a9a5cbf05d487c3edceb7f508333d8001f17e5b18133505d85681d"} Feb 23 17:42:48 crc kubenswrapper[4724]: I0223 17:42:48.579963 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" event={"ID":"12ceca0c-78de-41ff-8e20-cdf172bd915e","Type":"ContainerStarted","Data":"e3f665655356acf815d4e47897ba89fed57aee172b1efff5c1539cc18ea54c4c"} Feb 23 17:42:48 crc kubenswrapper[4724]: I0223 17:42:48.602258 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-hs7ws" podStartSLOduration=2.568094978 podStartE2EDuration="4.602233418s" podCreationTimestamp="2026-02-23 17:42:44 +0000 UTC" firstStartedPulling="2026-02-23 17:42:45.621587269 +0000 UTC m=+721.437786869" lastFinishedPulling="2026-02-23 17:42:47.655725699 +0000 UTC m=+723.471925309" observedRunningTime="2026-02-23 17:42:48.599972829 +0000 UTC m=+724.416172449" watchObservedRunningTime="2026-02-23 17:42:48.602233418 +0000 UTC m=+724.418433018" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.821596 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.823290 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.826130 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2ghv7" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.840315 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.841189 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.845673 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.854145 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.873442 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.889248 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cfznb"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.890239 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.984731 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf"] Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.985604 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.989059 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7vwqr" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.989296 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.989471 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992463 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dtz\" (UniqueName: \"kubernetes.io/projected/bce58068-4adb-427b-96f8-e289d595515d-kube-api-access-g8dtz\") pod \"nmstate-metrics-58c85c668d-2s7zq\" (UID: \"bce58068-4adb-427b-96f8-e289d595515d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992561 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-ovs-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1217d925-38a5-4311-a32a-49e306238283-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9hq\" (UniqueName: \"kubernetes.io/projected/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-kube-api-access-sp9hq\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmzbx\" (UniqueName: \"kubernetes.io/projected/1217d925-38a5-4311-a32a-49e306238283-kube-api-access-zmzbx\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.992987 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-nmstate-lock\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:54 crc kubenswrapper[4724]: I0223 17:42:54.993027 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-dbus-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.000985 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf"] Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094270 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-ovs-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094333 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9jzn\" (UniqueName: \"kubernetes.io/projected/86c89d64-bec0-4e95-ae8c-194200a9f20c-kube-api-access-h9jzn\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1217d925-38a5-4311-a32a-49e306238283-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9hq\" (UniqueName: \"kubernetes.io/projected/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-kube-api-access-sp9hq\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/86c89d64-bec0-4e95-ae8c-194200a9f20c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094473 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmzbx\" (UniqueName: \"kubernetes.io/projected/1217d925-38a5-4311-a32a-49e306238283-kube-api-access-zmzbx\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094489 4724 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-nmstate-lock\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094514 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-dbus-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.094535 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8dtz\" (UniqueName: \"kubernetes.io/projected/bce58068-4adb-427b-96f8-e289d595515d-kube-api-access-g8dtz\") pod \"nmstate-metrics-58c85c668d-2s7zq\" (UID: \"bce58068-4adb-427b-96f8-e289d595515d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.095056 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-nmstate-lock\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.095309 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-ovs-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.095407 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-dbus-socket\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.106433 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1217d925-38a5-4311-a32a-49e306238283-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.119488 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmzbx\" (UniqueName: \"kubernetes.io/projected/1217d925-38a5-4311-a32a-49e306238283-kube-api-access-zmzbx\") pod \"nmstate-webhook-866bcb46dc-49gxm\" (UID: \"1217d925-38a5-4311-a32a-49e306238283\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.121999 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp9hq\" (UniqueName: \"kubernetes.io/projected/28356f9d-af74-4f20-ba5c-8a40fda9ef6d-kube-api-access-sp9hq\") pod \"nmstate-handler-cfznb\" (UID: \"28356f9d-af74-4f20-ba5c-8a40fda9ef6d\") " pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.130132 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8dtz\" (UniqueName: 
\"kubernetes.io/projected/bce58068-4adb-427b-96f8-e289d595515d-kube-api-access-g8dtz\") pod \"nmstate-metrics-58c85c668d-2s7zq\" (UID: \"bce58068-4adb-427b-96f8-e289d595515d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.140786 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.154364 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.182068 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59bf9587c-q9w72"] Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.183017 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.194214 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59bf9587c-q9w72"] Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.195468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.195504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/86c89d64-bec0-4e95-ae8c-194200a9f20c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.195573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9jzn\" (UniqueName: \"kubernetes.io/projected/86c89d64-bec0-4e95-ae8c-194200a9f20c-kube-api-access-h9jzn\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: E0223 17:42:55.195585 4724 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 23 17:42:55 crc kubenswrapper[4724]: E0223 17:42:55.195640 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert podName:86c89d64-bec0-4e95-ae8c-194200a9f20c nodeName:}" failed. No retries permitted until 2026-02-23 17:42:55.695618626 +0000 UTC m=+731.511818226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-lrrzf" (UID: "86c89d64-bec0-4e95-ae8c-194200a9f20c") : secret "plugin-serving-cert" not found Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.196602 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/86c89d64-bec0-4e95-ae8c-194200a9f20c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.208063 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.235111 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9jzn\" (UniqueName: \"kubernetes.io/projected/86c89d64-bec0-4e95-ae8c-194200a9f20c-kube-api-access-h9jzn\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: W0223 17:42:55.236462 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28356f9d_af74_4f20_ba5c_8a40fda9ef6d.slice/crio-7c48f73d0a0dfbd3e4106f1129abf6ad41063c7eb76d57d9bda841a038cf4dbc WatchSource:0}: Error finding container 7c48f73d0a0dfbd3e4106f1129abf6ad41063c7eb76d57d9bda841a038cf4dbc: Status 404 returned error can't find the container with id 7c48f73d0a0dfbd3e4106f1129abf6ad41063c7eb76d57d9bda841a038cf4dbc Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297187 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297234 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-service-ca\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297291 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-oauth-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297315 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-trusted-ca-bundle\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297332 4724 
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297355 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.297418 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-oauth-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-oauth-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398568 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-service-ca\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398632 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-oauth-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398656 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-trusted-ca-bundle\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcrcd\" (UniqueName: \"kubernetes.io/projected/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-kube-api-access-hcrcd\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.398695 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.400752 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-oauth-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.401243 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-trusted-ca-bundle\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.401577 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.402103 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-service-ca\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.405341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-oauth-config\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.405473 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-console-serving-cert\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.432881 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcrcd\" (UniqueName: \"kubernetes.io/projected/406fd52c-6117-47e3-abe0-f9d0bb3f9f49-kube-api-access-hcrcd\") pod \"console-59bf9587c-q9w72\" (UID: \"406fd52c-6117-47e3-abe0-f9d0bb3f9f49\") " pod="openshift-console/console-59bf9587c-q9w72"
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.440210 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq"]
Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.544681 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59bf9587c-q9w72"
Need to start a new one" pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.615983 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm"] Feb 23 17:42:55 crc kubenswrapper[4724]: W0223 17:42:55.618989 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1217d925_38a5_4311_a32a_49e306238283.slice/crio-e19b9cd867d001eb674f0ebf7c8dd09de5583a84f6b1148ae640241ac0ff5a46 WatchSource:0}: Error finding container e19b9cd867d001eb674f0ebf7c8dd09de5583a84f6b1148ae640241ac0ff5a46: Status 404 returned error can't find the container with id e19b9cd867d001eb674f0ebf7c8dd09de5583a84f6b1148ae640241ac0ff5a46 Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.628365 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" event={"ID":"bce58068-4adb-427b-96f8-e289d595515d","Type":"ContainerStarted","Data":"15d1099390279b7972c654f434b6dd3a0b445e3bb6a1df3aa8a06ecdaba028c3"} Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.629732 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cfznb" event={"ID":"28356f9d-af74-4f20-ba5c-8a40fda9ef6d","Type":"ContainerStarted","Data":"7c48f73d0a0dfbd3e4106f1129abf6ad41063c7eb76d57d9bda841a038cf4dbc"} Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.633625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" event={"ID":"1217d925-38a5-4311-a32a-49e306238283","Type":"ContainerStarted","Data":"e19b9cd867d001eb674f0ebf7c8dd09de5583a84f6b1148ae640241ac0ff5a46"} Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.703714 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.710141 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/86c89d64-bec0-4e95-ae8c-194200a9f20c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-lrrzf\" (UID: \"86c89d64-bec0-4e95-ae8c-194200a9f20c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.735455 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59bf9587c-q9w72"] Feb 23 17:42:55 crc kubenswrapper[4724]: W0223 17:42:55.737494 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406fd52c_6117_47e3_abe0_f9d0bb3f9f49.slice/crio-4781776b7c8e96278f45945a11b704b9fa4a9008a522309845cf951dc5aab9fe WatchSource:0}: Error finding container 4781776b7c8e96278f45945a11b704b9fa4a9008a522309845cf951dc5aab9fe: Status 404 returned error can't find the container with id 4781776b7c8e96278f45945a11b704b9fa4a9008a522309845cf951dc5aab9fe Feb 23 17:42:55 crc kubenswrapper[4724]: I0223 17:42:55.906339 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" Feb 23 17:42:56 crc kubenswrapper[4724]: I0223 17:42:56.363277 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf"] Feb 23 17:42:56 crc kubenswrapper[4724]: I0223 17:42:56.642619 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" event={"ID":"86c89d64-bec0-4e95-ae8c-194200a9f20c","Type":"ContainerStarted","Data":"d6856e9e1dd4f8c541c2a451e31685cb8895aa6ef37b8e163d803e13f7bd1027"} Feb 23 17:42:56 crc kubenswrapper[4724]: I0223 17:42:56.644449 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59bf9587c-q9w72" event={"ID":"406fd52c-6117-47e3-abe0-f9d0bb3f9f49","Type":"ContainerStarted","Data":"9ffcf9657cb69808ee9803d8df01937d984892cad88ae468242bff21eb859980"} Feb 23 17:42:56 crc kubenswrapper[4724]: I0223 17:42:56.644495 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59bf9587c-q9w72" event={"ID":"406fd52c-6117-47e3-abe0-f9d0bb3f9f49","Type":"ContainerStarted","Data":"4781776b7c8e96278f45945a11b704b9fa4a9008a522309845cf951dc5aab9fe"} Feb 23 17:42:56 crc kubenswrapper[4724]: I0223 17:42:56.680910 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59bf9587c-q9w72" podStartSLOduration=1.680878956 podStartE2EDuration="1.680878956s" podCreationTimestamp="2026-02-23 17:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:42:56.663150411 +0000 UTC m=+732.479350021" watchObservedRunningTime="2026-02-23 17:42:56.680878956 +0000 UTC m=+732.497078576" Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.663952 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" event={"ID":"1217d925-38a5-4311-a32a-49e306238283","Type":"ContainerStarted","Data":"aee20976707ee410bcd18fd0e257eadd66e1882b4bb20b10525a9004b5a8c990"} Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.664777 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.669047 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" event={"ID":"bce58068-4adb-427b-96f8-e289d595515d","Type":"ContainerStarted","Data":"596e3a4d089ce772722e50f94dde59a7cd3004f9afce020d40cfa9e70bc1295b"} Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.678768 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cfznb" event={"ID":"28356f9d-af74-4f20-ba5c-8a40fda9ef6d","Type":"ContainerStarted","Data":"b1ca1dcdb7a7dbdc17cc2cd30323dda5b291afc25f726f272c560d496fd63c18"} Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.679008 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.680895 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm" podStartSLOduration=2.588186166 podStartE2EDuration="4.680880511s" podCreationTimestamp="2026-02-23 17:42:54 +0000 UTC" firstStartedPulling="2026-02-23 17:42:55.624148257 +0000 UTC m=+731.440347887" 
lastFinishedPulling="2026-02-23 17:42:57.716842622 +0000 UTC m=+733.533042232" observedRunningTime="2026-02-23 17:42:58.679954377 +0000 UTC m=+734.496153987" watchObservedRunningTime="2026-02-23 17:42:58.680880511 +0000 UTC m=+734.497080111" Feb 23 17:42:58 crc kubenswrapper[4724]: I0223 17:42:58.715399 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cfznb" podStartSLOduration=2.259982826 podStartE2EDuration="4.715357435s" podCreationTimestamp="2026-02-23 17:42:54 +0000 UTC" firstStartedPulling="2026-02-23 17:42:55.239928678 +0000 UTC m=+731.056128278" lastFinishedPulling="2026-02-23 17:42:57.695303267 +0000 UTC m=+733.511502887" observedRunningTime="2026-02-23 17:42:58.71019033 +0000 UTC m=+734.526389930" watchObservedRunningTime="2026-02-23 17:42:58.715357435 +0000 UTC m=+734.531557035" Feb 23 17:42:59 crc kubenswrapper[4724]: I0223 17:42:59.687053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" event={"ID":"86c89d64-bec0-4e95-ae8c-194200a9f20c","Type":"ContainerStarted","Data":"79384f5c38f130ec1c232f4dccc2c9f78dfca72ef10ff49b1761be940c0f1e99"} Feb 23 17:42:59 crc kubenswrapper[4724]: I0223 17:42:59.716011 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-lrrzf" podStartSLOduration=3.479738702 podStartE2EDuration="5.715985293s" podCreationTimestamp="2026-02-23 17:42:54 +0000 UTC" firstStartedPulling="2026-02-23 17:42:56.373211816 +0000 UTC m=+732.189411436" lastFinishedPulling="2026-02-23 17:42:58.609458417 +0000 UTC m=+734.425658027" observedRunningTime="2026-02-23 17:42:59.71091445 +0000 UTC m=+735.527114050" watchObservedRunningTime="2026-02-23 17:42:59.715985293 +0000 UTC m=+735.532184893" Feb 23 17:43:00 crc kubenswrapper[4724]: I0223 17:43:00.703082 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" event={"ID":"bce58068-4adb-427b-96f8-e289d595515d","Type":"ContainerStarted","Data":"993ba508ef7fc770539ad973d8a64434c5a49961c988beb38bbc42ab67151ce6"} Feb 23 17:43:00 crc kubenswrapper[4724]: I0223 17:43:00.738908 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2s7zq" podStartSLOduration=2.54489484 podStartE2EDuration="6.738882016s" podCreationTimestamp="2026-02-23 17:42:54 +0000 UTC" firstStartedPulling="2026-02-23 17:42:55.451316453 +0000 UTC m=+731.267516053" lastFinishedPulling="2026-02-23 17:42:59.645303629 +0000 UTC m=+735.461503229" observedRunningTime="2026-02-23 17:43:00.725985888 +0000 UTC m=+736.542185498" watchObservedRunningTime="2026-02-23 17:43:00.738882016 +0000 UTC m=+736.555081616" Feb 23 17:43:05 crc kubenswrapper[4724]: I0223 17:43:05.255665 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cfznb" Feb 23 17:43:05 crc kubenswrapper[4724]: I0223 17:43:05.545604 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:43:05 crc kubenswrapper[4724]: I0223 17:43:05.545657 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:43:05 crc kubenswrapper[4724]: I0223 17:43:05.553317 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59bf9587c-q9w72" Feb 23 17:43:05 crc 
Feb 23 17:43:05 crc kubenswrapper[4724]: I0223 17:43:05.803699 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fknnv"]
Feb 23 17:43:15 crc kubenswrapper[4724]: I0223 17:43:15.160804 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-49gxm"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.512771 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"]
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.515130 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.521018 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.539163 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"]
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.694417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.694984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c94s\" (UniqueName: \"kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.695045 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.796074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c94s\" (UniqueName: \"kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.796125 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.796181 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.796714 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.796845 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.818366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c94s\" (UniqueName: \"kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.834316 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:30 crc kubenswrapper[4724]: I0223 17:43:30.856850 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fknnv" podUID="997b5710-9b99-4207-92da-28b7a1923db2" containerName="console" containerID="cri-o://5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6" gracePeriod=15 Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.076218 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf"] Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.225968 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fknnv_997b5710-9b99-4207-92da-28b7a1923db2/console/0.log" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.226342 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404113 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404178 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404371 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcfrr\" (UniqueName: \"kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404404 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.404439 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert\") pod \"997b5710-9b99-4207-92da-28b7a1923db2\" (UID: \"997b5710-9b99-4207-92da-28b7a1923db2\") " Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.405267 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.405293 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.405337 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config" (OuterVolumeSpecName: "console-config") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.405512 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca" (OuterVolumeSpecName: "service-ca") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.411766 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr" (OuterVolumeSpecName: "kube-api-access-fcfrr") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "kube-api-access-fcfrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.411795 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.412008 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "997b5710-9b99-4207-92da-28b7a1923db2" (UID: "997b5710-9b99-4207-92da-28b7a1923db2"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506238 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506276 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcfrr\" (UniqueName: \"kubernetes.io/projected/997b5710-9b99-4207-92da-28b7a1923db2-kube-api-access-fcfrr\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506290 4724 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506305 4724 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506316 4724 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506327 4724 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/997b5710-9b99-4207-92da-28b7a1923db2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.506338 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/997b5710-9b99-4207-92da-28b7a1923db2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926365 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fknnv_997b5710-9b99-4207-92da-28b7a1923db2/console/0.log" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926456 4724 generic.go:334] "Generic (PLEG): container finished" podID="997b5710-9b99-4207-92da-28b7a1923db2" containerID="5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6" exitCode=2 Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fknnv" event={"ID":"997b5710-9b99-4207-92da-28b7a1923db2","Type":"ContainerDied","Data":"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6"} Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926538 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fknnv" event={"ID":"997b5710-9b99-4207-92da-28b7a1923db2","Type":"ContainerDied","Data":"60e9628fccca08824d603e3c61e2b10981b7f61c45137c8b029895ea222fac7a"} Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926559 4724 scope.go:117] "RemoveContainer" containerID="5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.926599 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fknnv" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.929504 4724 generic.go:334] "Generic (PLEG): container finished" podID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerID="20dc85c11089379816608b880835f5b40b295602a95691f57b17906921a9dc41" exitCode=0 Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.929550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" event={"ID":"371db8f9-a502-40cb-a0cd-256b481c12aa","Type":"ContainerDied","Data":"20dc85c11089379816608b880835f5b40b295602a95691f57b17906921a9dc41"} Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.929583 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" event={"ID":"371db8f9-a502-40cb-a0cd-256b481c12aa","Type":"ContainerStarted","Data":"ca7ebb52298cc706693534117fa158d1506d373c8db166b168b73eb8086f2308"} Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.951081 4724 scope.go:117] "RemoveContainer" containerID="5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6" Feb 23 17:43:31 crc kubenswrapper[4724]: E0223 17:43:31.951886 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6\": container with ID starting with 5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6 not found: ID does not exist" containerID="5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.951921 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6"} err="failed to get container status \"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6\": rpc error: code = NotFound desc = could not find container \"5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6\": container with ID starting with 5f3cb84d271e733bef79fc17cc127f47657cda1d8996903130f4b9174dee5bb6 not found: ID does not exist" Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.963116 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fknnv"] Feb 23 17:43:31 crc kubenswrapper[4724]: I0223 17:43:31.966991 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fknnv"] Feb 23 17:43:32 crc kubenswrapper[4724]: I0223 17:43:32.961542 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997b5710-9b99-4207-92da-28b7a1923db2" path="/var/lib/kubelet/pods/997b5710-9b99-4207-92da-28b7a1923db2/volumes" Feb 23 17:43:33 crc kubenswrapper[4724]: I0223 17:43:33.959099 4724 generic.go:334] "Generic (PLEG): container finished" podID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerID="c375287ef09e40f0a81f9d84beccc412dac878284390ea29be86b572fbfdf6df" exitCode=0 Feb 23 17:43:33 crc kubenswrapper[4724]: I0223 17:43:33.959199 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" event={"ID":"371db8f9-a502-40cb-a0cd-256b481c12aa","Type":"ContainerDied","Data":"c375287ef09e40f0a81f9d84beccc412dac878284390ea29be86b572fbfdf6df"} Feb 23 17:43:34 crc kubenswrapper[4724]: I0223 17:43:34.966723 4724 
generic.go:334] "Generic (PLEG): container finished" podID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerID="ca2a3784b1cfe334a2f882983803825aa8a9bf7e5f49b24f262b155360115a82" exitCode=0 Feb 23 17:43:34 crc kubenswrapper[4724]: I0223 17:43:34.966810 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" event={"ID":"371db8f9-a502-40cb-a0cd-256b481c12aa","Type":"ContainerDied","Data":"ca2a3784b1cfe334a2f882983803825aa8a9bf7e5f49b24f262b155360115a82"} Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.227551 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.278850 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util\") pod \"371db8f9-a502-40cb-a0cd-256b481c12aa\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.278952 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle\") pod \"371db8f9-a502-40cb-a0cd-256b481c12aa\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.279084 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c94s\" (UniqueName: \"kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s\") pod \"371db8f9-a502-40cb-a0cd-256b481c12aa\" (UID: \"371db8f9-a502-40cb-a0cd-256b481c12aa\") " Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.282025 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle" (OuterVolumeSpecName: "bundle") pod "371db8f9-a502-40cb-a0cd-256b481c12aa" (UID: "371db8f9-a502-40cb-a0cd-256b481c12aa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.285846 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s" (OuterVolumeSpecName: "kube-api-access-4c94s") pod "371db8f9-a502-40cb-a0cd-256b481c12aa" (UID: "371db8f9-a502-40cb-a0cd-256b481c12aa"). InnerVolumeSpecName "kube-api-access-4c94s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.293923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util" (OuterVolumeSpecName: "util") pod "371db8f9-a502-40cb-a0cd-256b481c12aa" (UID: "371db8f9-a502-40cb-a0cd-256b481c12aa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.381602 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c94s\" (UniqueName: \"kubernetes.io/projected/371db8f9-a502-40cb-a0cd-256b481c12aa-kube-api-access-4c94s\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.381651 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-util\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.381668 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/371db8f9-a502-40cb-a0cd-256b481c12aa-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.984014 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" event={"ID":"371db8f9-a502-40cb-a0cd-256b481c12aa","Type":"ContainerDied","Data":"ca7ebb52298cc706693534117fa158d1506d373c8db166b168b73eb8086f2308"} Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.984074 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7ebb52298cc706693534117fa158d1506d373c8db166b168b73eb8086f2308" Feb 23 17:43:36 crc kubenswrapper[4724]: I0223 17:43:36.984126 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.868209 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"] Feb 23 17:43:45 crc kubenswrapper[4724]: E0223 17:43:45.869211 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="pull" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869227 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="pull" Feb 23 17:43:45 crc kubenswrapper[4724]: E0223 17:43:45.869236 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997b5710-9b99-4207-92da-28b7a1923db2" containerName="console" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869242 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="997b5710-9b99-4207-92da-28b7a1923db2" containerName="console" Feb 23 17:43:45 crc kubenswrapper[4724]: E0223 17:43:45.869250 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="util" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869256 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="util" Feb 23 17:43:45 crc kubenswrapper[4724]: E0223 17:43:45.869263 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="extract" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869269 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="extract" Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869366 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="371db8f9-a502-40cb-a0cd-256b481c12aa" containerName="extract" Feb 
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.869866 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.872643 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.872697 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.872644 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.874604 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mjz4h"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.874776 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 23 17:43:45 crc kubenswrapper[4724]: I0223 17:43:45.882360 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"]
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.007709 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-apiservice-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.007785 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxm24\" (UniqueName: \"kubernetes.io/projected/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-kube-api-access-qxm24\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.007813 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-webhook-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.109239 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-webhook-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.109491 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-apiservice-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"
\"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-apiservice-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.109633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxm24\" (UniqueName: \"kubernetes.io/projected/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-kube-api-access-qxm24\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.117137 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-apiservice-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.127003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-webhook-cert\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.127351 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxm24\" (UniqueName: \"kubernetes.io/projected/8fb21fbd-388b-4b8f-a0ec-78f2396bf456-kube-api-access-qxm24\") pod \"metallb-operator-controller-manager-5bb6655d58-zmrrz\" (UID: \"8fb21fbd-388b-4b8f-a0ec-78f2396bf456\") " pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.194927 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.314742 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"] Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.315755 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.321841 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.321893 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.325863 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-psl5w" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.343203 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"] Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.413954 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxzm\" (UniqueName: \"kubernetes.io/projected/7e8b0053-5568-4e4e-8021-f2351dc9f4df-kube-api-access-hwxzm\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.414041 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8b0053-5568-4e4e-8021-f2351dc9f4df-webhook-cert\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.414078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e8b0053-5568-4e4e-8021-f2351dc9f4df-apiservice-cert\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.515559 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwxzm\" (UniqueName: \"kubernetes.io/projected/7e8b0053-5568-4e4e-8021-f2351dc9f4df-kube-api-access-hwxzm\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.515630 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8b0053-5568-4e4e-8021-f2351dc9f4df-webhook-cert\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.515660 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e8b0053-5568-4e4e-8021-f2351dc9f4df-apiservice-cert\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.523359 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e8b0053-5568-4e4e-8021-f2351dc9f4df-webhook-cert\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.551354 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwxzm\" (UniqueName: \"kubernetes.io/projected/7e8b0053-5568-4e4e-8021-f2351dc9f4df-kube-api-access-hwxzm\") pod \"metallb-operator-webhook-server-745c85d5d8-v6vwt\" (UID: \"7e8b0053-5568-4e4e-8021-f2351dc9f4df\") " pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.631603 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"
Feb 23 17:43:46 crc kubenswrapper[4724]: I0223 17:43:46.840613 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz"]
Feb 23 17:43:46 crc kubenswrapper[4724]: W0223 17:43:46.850937 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb21fbd_388b_4b8f_a0ec_78f2396bf456.slice/crio-59b1b3159f5ddf0a460088ea24261964bb8d1ba8155681ece25720bdefd2dd30 WatchSource:0}: Error finding container 59b1b3159f5ddf0a460088ea24261964bb8d1ba8155681ece25720bdefd2dd30: Status 404 returned error can't find the container with id 59b1b3159f5ddf0a460088ea24261964bb8d1ba8155681ece25720bdefd2dd30
Feb 23 17:43:47 crc kubenswrapper[4724]: I0223 17:43:47.040261 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" event={"ID":"8fb21fbd-388b-4b8f-a0ec-78f2396bf456","Type":"ContainerStarted","Data":"59b1b3159f5ddf0a460088ea24261964bb8d1ba8155681ece25720bdefd2dd30"}
Feb 23 17:43:47 crc kubenswrapper[4724]: I0223 17:43:47.100852 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt"]
Feb 23 17:43:47 crc kubenswrapper[4724]: W0223 17:43:47.111871 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e8b0053_5568_4e4e_8021_f2351dc9f4df.slice/crio-9a318b2596c9f97952655989a99d94d37194b2117a7565c5b8ba4bd759ad39c6 WatchSource:0}: Error finding container 9a318b2596c9f97952655989a99d94d37194b2117a7565c5b8ba4bd759ad39c6: Status 404 returned error can't find the container with id 9a318b2596c9f97952655989a99d94d37194b2117a7565c5b8ba4bd759ad39c6
Feb 23 17:43:48 crc kubenswrapper[4724]: I0223 17:43:48.048118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" event={"ID":"7e8b0053-5568-4e4e-8021-f2351dc9f4df","Type":"ContainerStarted","Data":"9a318b2596c9f97952655989a99d94d37194b2117a7565c5b8ba4bd759ad39c6"}
event={"ID":"7e8b0053-5568-4e4e-8021-f2351dc9f4df","Type":"ContainerStarted","Data":"9a318b2596c9f97952655989a99d94d37194b2117a7565c5b8ba4bd759ad39c6"} Feb 23 17:43:51 crc kubenswrapper[4724]: I0223 17:43:51.069641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" event={"ID":"8fb21fbd-388b-4b8f-a0ec-78f2396bf456","Type":"ContainerStarted","Data":"57036cac70555583a787b8862d893587edcaabb71683af6d33ee5070e52a7e64"} Feb 23 17:43:51 crc kubenswrapper[4724]: I0223 17:43:51.075481 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:43:52 crc kubenswrapper[4724]: I0223 17:43:52.078608 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" event={"ID":"7e8b0053-5568-4e4e-8021-f2351dc9f4df","Type":"ContainerStarted","Data":"f29ef4cddccfa7ee10729942fc8bc93d12333d75445c1ff5162de62a423e4e79"} Feb 23 17:43:52 crc kubenswrapper[4724]: I0223 17:43:52.078679 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:43:52 crc kubenswrapper[4724]: I0223 17:43:52.096972 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" podStartSLOduration=1.499049968 podStartE2EDuration="6.096944816s" podCreationTimestamp="2026-02-23 17:43:46 +0000 UTC" firstStartedPulling="2026-02-23 17:43:47.115206957 +0000 UTC m=+782.931406557" lastFinishedPulling="2026-02-23 17:43:51.713101805 +0000 UTC m=+787.529301405" observedRunningTime="2026-02-23 17:43:52.094672547 +0000 UTC m=+787.910872147" watchObservedRunningTime="2026-02-23 17:43:52.096944816 +0000 UTC m=+787.913144426" Feb 23 17:43:52 crc kubenswrapper[4724]: I0223 17:43:52.102481 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" podStartSLOduration=3.807088808 podStartE2EDuration="7.102456428s" podCreationTimestamp="2026-02-23 17:43:45 +0000 UTC" firstStartedPulling="2026-02-23 17:43:46.854558374 +0000 UTC m=+782.670757984" lastFinishedPulling="2026-02-23 17:43:50.149926004 +0000 UTC m=+785.966125604" observedRunningTime="2026-02-23 17:43:51.110623935 +0000 UTC m=+786.926823555" watchObservedRunningTime="2026-02-23 17:43:52.102456428 +0000 UTC m=+787.918656028" Feb 23 17:43:53 crc kubenswrapper[4724]: I0223 17:43:53.323252 4724 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 17:44:06 crc kubenswrapper[4724]: I0223 17:44:06.635897 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-745c85d5d8-v6vwt" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.197975 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bb6655d58-zmrrz" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.963149 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6cjbc"] Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.966122 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.970358 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.970720 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-788vk" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.970911 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971415 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-metrics\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zt8\" (UniqueName: \"kubernetes.io/projected/da6b6734-568e-4283-8df5-f8e9abbef784-kube-api-access-x4zt8\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971553 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da6b6734-568e-4283-8df5-f8e9abbef784-frr-startup\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971599 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-sockets\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971613 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-reloader\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.971635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-conf\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.985545 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh"] Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.986775 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:26 crc kubenswrapper[4724]: I0223 17:44:26.991583 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.005164 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh"] Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4zt8\" (UniqueName: \"kubernetes.io/projected/da6b6734-568e-4283-8df5-f8e9abbef784-kube-api-access-x4zt8\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072095 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072122 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072138 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da6b6734-568e-4283-8df5-f8e9abbef784-frr-startup\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072161 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-sockets\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072175 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-reloader\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072195 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg4r8\" (UniqueName: \"kubernetes.io/projected/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-kube-api-access-rg4r8\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072221 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-conf\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072249 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-metrics\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.072513 4724 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.072582 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs podName:da6b6734-568e-4283-8df5-f8e9abbef784 nodeName:}" failed. No retries permitted until 2026-02-23 17:44:27.572554988 +0000 UTC m=+823.388754588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs") pod "frr-k8s-6cjbc" (UID: "da6b6734-568e-4283-8df5-f8e9abbef784") : secret "frr-k8s-certs-secret" not found Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072791 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-reloader\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.072876 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-conf\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.073013 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-metrics\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.073056 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da6b6734-568e-4283-8df5-f8e9abbef784-frr-sockets\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.073542 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da6b6734-568e-4283-8df5-f8e9abbef784-frr-startup\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.093901 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dxbt6"] Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.095343 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.100840 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.100992 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.101198 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-545fw" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.101971 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.119465 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4zt8\" (UniqueName: \"kubernetes.io/projected/da6b6734-568e-4283-8df5-f8e9abbef784-kube-api-access-x4zt8\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.126565 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-fhn7w"] Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.127740 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.129747 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.145117 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fhn7w"] Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.173016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg4r8\" (UniqueName: \"kubernetes.io/projected/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-kube-api-access-rg4r8\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.173500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.173605 4724 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.173658 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert podName:39fa75d7-3799-41ce-9a9e-ebf9dd8c347b nodeName:}" failed. No retries permitted until 2026-02-23 17:44:27.673641816 +0000 UTC m=+823.489841416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert") pod "frr-k8s-webhook-server-78b44bf5bb-kcvmh" (UID: "39fa75d7-3799-41ce-9a9e-ebf9dd8c347b") : secret "frr-k8s-webhook-server-cert" not found Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.202724 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg4r8\" (UniqueName: \"kubernetes.io/projected/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-kube-api-access-rg4r8\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274681 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-cert\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274735 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274783 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274846 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metallb-excludel2\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274908 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-metrics-certs\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274947 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkt5g\" (UniqueName: \"kubernetes.io/projected/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-kube-api-access-tkt5g\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.274974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhv5h\" (UniqueName: \"kubernetes.io/projected/8637711e-f5d2-43e1-b8f6-65df43b16ffc-kube-api-access-jhv5h\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.376828 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tkt5g\" (UniqueName: \"kubernetes.io/projected/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-kube-api-access-tkt5g\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.376891 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhv5h\" (UniqueName: \"kubernetes.io/projected/8637711e-f5d2-43e1-b8f6-65df43b16ffc-kube-api-access-jhv5h\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.376953 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-cert\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.376977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.377004 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.377018 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metallb-excludel2\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.377047 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-metrics-certs\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.378355 4724 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.378504 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs podName:a55b73c4-da87-4ce8-8418-3d6d854c0b0e nodeName:}" failed. No retries permitted until 2026-02-23 17:44:27.878469649 +0000 UTC m=+823.694669259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs") pod "speaker-dxbt6" (UID: "a55b73c4-da87-4ce8-8418-3d6d854c0b0e") : secret "speaker-certs-secret" not found Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.378877 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metallb-excludel2\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.379008 4724 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.379132 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist podName:a55b73c4-da87-4ce8-8418-3d6d854c0b0e nodeName:}" failed. No retries permitted until 2026-02-23 17:44:27.879105185 +0000 UTC m=+823.695304895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist") pod "speaker-dxbt6" (UID: "a55b73c4-da87-4ce8-8418-3d6d854c0b0e") : secret "metallb-memberlist" not found Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.381341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-metrics-certs\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.384111 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8637711e-f5d2-43e1-b8f6-65df43b16ffc-cert\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.398138 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhv5h\" (UniqueName: \"kubernetes.io/projected/8637711e-f5d2-43e1-b8f6-65df43b16ffc-kube-api-access-jhv5h\") pod \"controller-69bbfbf88f-fhn7w\" (UID: \"8637711e-f5d2-43e1-b8f6-65df43b16ffc\") " pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.398694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt5g\" (UniqueName: \"kubernetes.io/projected/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-kube-api-access-tkt5g\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.459319 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.580152 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.584208 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da6b6734-568e-4283-8df5-f8e9abbef784-metrics-certs\") pod \"frr-k8s-6cjbc\" (UID: \"da6b6734-568e-4283-8df5-f8e9abbef784\") " pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.584545 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.681482 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.686063 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39fa75d7-3799-41ce-9a9e-ebf9dd8c347b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kcvmh\" (UID: \"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.726864 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fhn7w"] Feb 23 17:44:27 crc kubenswrapper[4724]: W0223 17:44:27.732129 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8637711e_f5d2_43e1_b8f6_65df43b16ffc.slice/crio-ff69efb24a778dbe4452f4dbe04273c4195a5f96dc344b9e6745e931d003b062 WatchSource:0}: Error finding container ff69efb24a778dbe4452f4dbe04273c4195a5f96dc344b9e6745e931d003b062: Status 404 returned error can't find the container with id ff69efb24a778dbe4452f4dbe04273c4195a5f96dc344b9e6745e931d003b062 Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.885370 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.885461 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.885621 4724 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 17:44:27 crc kubenswrapper[4724]: E0223 17:44:27.885692 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist 
podName:a55b73c4-da87-4ce8-8418-3d6d854c0b0e nodeName:}" failed. No retries permitted until 2026-02-23 17:44:28.885670842 +0000 UTC m=+824.701870442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist") pod "speaker-dxbt6" (UID: "a55b73c4-da87-4ce8-8418-3d6d854c0b0e") : secret "metallb-memberlist" not found Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.891385 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-metrics-certs\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:27 crc kubenswrapper[4724]: I0223 17:44:27.901959 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.183594 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh"] Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.186641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fhn7w" event={"ID":"8637711e-f5d2-43e1-b8f6-65df43b16ffc","Type":"ContainerStarted","Data":"d14e0970f1b446cd32a8085b99e497a994473fb1bfb7ceeac1d96a5f53fbbca9"} Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.186715 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fhn7w" event={"ID":"8637711e-f5d2-43e1-b8f6-65df43b16ffc","Type":"ContainerStarted","Data":"aa51dc2004afb86ab68436db51a5f909ee763fac2ad4a33ad4ecd6fd50aefe0c"} Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.186735 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fhn7w" event={"ID":"8637711e-f5d2-43e1-b8f6-65df43b16ffc","Type":"ContainerStarted","Data":"ff69efb24a778dbe4452f4dbe04273c4195a5f96dc344b9e6745e931d003b062"} Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.187022 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.187758 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"911f7ce3c254207f4c980a22e7ad7658a5e1bbad5b9deeeee3e4e8106e48e4fe"} Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.216788 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-fhn7w" podStartSLOduration=1.216765602 podStartE2EDuration="1.216765602s" podCreationTimestamp="2026-02-23 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:44:28.208828547 +0000 UTC m=+824.025028147" watchObservedRunningTime="2026-02-23 17:44:28.216765602 +0000 UTC m=+824.032965202" Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.905608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:28 crc 
kubenswrapper[4724]: I0223 17:44:28.912542 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a55b73c4-da87-4ce8-8418-3d6d854c0b0e-memberlist\") pod \"speaker-dxbt6\" (UID: \"a55b73c4-da87-4ce8-8418-3d6d854c0b0e\") " pod="metallb-system/speaker-dxbt6" Feb 23 17:44:28 crc kubenswrapper[4724]: I0223 17:44:28.942434 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dxbt6" Feb 23 17:44:28 crc kubenswrapper[4724]: W0223 17:44:28.968550 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55b73c4_da87_4ce8_8418_3d6d854c0b0e.slice/crio-2d8779991056c252133931e36e08a2a9d5b7da62e2029fae70797a4ada4aa29b WatchSource:0}: Error finding container 2d8779991056c252133931e36e08a2a9d5b7da62e2029fae70797a4ada4aa29b: Status 404 returned error can't find the container with id 2d8779991056c252133931e36e08a2a9d5b7da62e2029fae70797a4ada4aa29b Feb 23 17:44:29 crc kubenswrapper[4724]: I0223 17:44:29.206120 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dxbt6" event={"ID":"a55b73c4-da87-4ce8-8418-3d6d854c0b0e","Type":"ContainerStarted","Data":"2d8779991056c252133931e36e08a2a9d5b7da62e2029fae70797a4ada4aa29b"} Feb 23 17:44:29 crc kubenswrapper[4724]: I0223 17:44:29.212486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" event={"ID":"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b","Type":"ContainerStarted","Data":"527685982c2bd9d4350d2480bb5eefdd24a2eadc2900e345797d56ab763e681a"} Feb 23 17:44:30 crc kubenswrapper[4724]: I0223 17:44:30.226712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dxbt6" event={"ID":"a55b73c4-da87-4ce8-8418-3d6d854c0b0e","Type":"ContainerStarted","Data":"d0ab61d0ffcb3dfe1cba58b5acb43ed7db824b566172aca3dcaa29f29dbe24f6"} Feb 23 17:44:30 crc kubenswrapper[4724]: I0223 17:44:30.232685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dxbt6" event={"ID":"a55b73c4-da87-4ce8-8418-3d6d854c0b0e","Type":"ContainerStarted","Data":"468ddac19afe89bd6ad250fcf9abbc808295ed5bf8f5d2228319e39a090095a8"} Feb 23 17:44:30 crc kubenswrapper[4724]: I0223 17:44:30.233106 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dxbt6" Feb 23 17:44:30 crc kubenswrapper[4724]: I0223 17:44:30.256550 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dxbt6" podStartSLOduration=3.256530735 podStartE2EDuration="3.256530735s" podCreationTimestamp="2026-02-23 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:44:30.249759031 +0000 UTC m=+826.065958631" watchObservedRunningTime="2026-02-23 17:44:30.256530735 +0000 UTC m=+826.072730335" Feb 23 17:44:35 crc kubenswrapper[4724]: I0223 17:44:35.275003 4724 generic.go:334] "Generic (PLEG): container finished" podID="da6b6734-568e-4283-8df5-f8e9abbef784" containerID="6992a2c93733cc4a58f02d1ec6f2e5e629ae713c464c6d4e8d1d75a7a6d2b50e" exitCode=0 Feb 23 17:44:35 crc kubenswrapper[4724]: I0223 17:44:35.275057 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" 
event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerDied","Data":"6992a2c93733cc4a58f02d1ec6f2e5e629ae713c464c6d4e8d1d75a7a6d2b50e"} Feb 23 17:44:35 crc kubenswrapper[4724]: I0223 17:44:35.278818 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" event={"ID":"39fa75d7-3799-41ce-9a9e-ebf9dd8c347b","Type":"ContainerStarted","Data":"6b70183f189db22cc07bcd502b3522981930d248bf311423bd409fae8995669c"} Feb 23 17:44:35 crc kubenswrapper[4724]: I0223 17:44:35.279047 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:36 crc kubenswrapper[4724]: I0223 17:44:36.286885 4724 generic.go:334] "Generic (PLEG): container finished" podID="da6b6734-568e-4283-8df5-f8e9abbef784" containerID="9a265bb92fae224cdfc4895783545ab035106911918f9b137049e5dff4f8f772" exitCode=0 Feb 23 17:44:36 crc kubenswrapper[4724]: I0223 17:44:36.286975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerDied","Data":"9a265bb92fae224cdfc4895783545ab035106911918f9b137049e5dff4f8f772"} Feb 23 17:44:36 crc kubenswrapper[4724]: I0223 17:44:36.313864 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" podStartSLOduration=3.47277909 podStartE2EDuration="10.313836019s" podCreationTimestamp="2026-02-23 17:44:26 +0000 UTC" firstStartedPulling="2026-02-23 17:44:28.199227 +0000 UTC m=+824.015426620" lastFinishedPulling="2026-02-23 17:44:35.040283949 +0000 UTC m=+830.856483549" observedRunningTime="2026-02-23 17:44:35.3179025 +0000 UTC m=+831.134102100" watchObservedRunningTime="2026-02-23 17:44:36.313836019 +0000 UTC m=+832.130035619" Feb 23 17:44:37 crc kubenswrapper[4724]: I0223 17:44:37.297334 4724 generic.go:334] "Generic (PLEG): container finished" podID="da6b6734-568e-4283-8df5-f8e9abbef784" containerID="bae5eeb34ed4a7686a4281b339abab6473e7d39dfa2fd38ad7285def68bbf5bd" exitCode=0 Feb 23 17:44:37 crc kubenswrapper[4724]: I0223 17:44:37.297441 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerDied","Data":"bae5eeb34ed4a7686a4281b339abab6473e7d39dfa2fd38ad7285def68bbf5bd"} Feb 23 17:44:37 crc kubenswrapper[4724]: I0223 17:44:37.464569 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-fhn7w" Feb 23 17:44:38 crc kubenswrapper[4724]: I0223 17:44:38.563881 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"cfc0d8999f93aadc91dbe0353e35e482e0cba9ed24541cbf27ae1fe7c976bf6e"} Feb 23 17:44:38 crc kubenswrapper[4724]: I0223 17:44:38.563944 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"0eabe0758279a3d79cabc586dc0a5013db8db41034d68201cd3c02eb40cde825"} Feb 23 17:44:38 crc kubenswrapper[4724]: I0223 17:44:38.563959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"1af2da0d9173adb24a2d786a11b9453bbd6cde0cb689b99bf3ffb5b720e23108"} Feb 23 17:44:38 crc 
kubenswrapper[4724]: I0223 17:44:38.563973 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"dd07e7eb1ef56b92186bea2a8cc4db2c8972560820a2edf453de39b236a080cd"} Feb 23 17:44:38 crc kubenswrapper[4724]: I0223 17:44:38.563985 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"650f42fd0ecfbb7a21cf01f71f3064ae273e149acf3b40867ec02722c15f9f14"} Feb 23 17:44:39 crc kubenswrapper[4724]: I0223 17:44:39.577866 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6cjbc" event={"ID":"da6b6734-568e-4283-8df5-f8e9abbef784","Type":"ContainerStarted","Data":"a3ed2bac144295ca02cbd811ad332de02fdcb1d4992b2d559e9884a43f2f41a1"} Feb 23 17:44:39 crc kubenswrapper[4724]: I0223 17:44:39.578371 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:39 crc kubenswrapper[4724]: I0223 17:44:39.603713 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6cjbc" podStartSLOduration=6.377297881 podStartE2EDuration="13.603687709s" podCreationTimestamp="2026-02-23 17:44:26 +0000 UTC" firstStartedPulling="2026-02-23 17:44:27.838403183 +0000 UTC m=+823.654602803" lastFinishedPulling="2026-02-23 17:44:35.064793031 +0000 UTC m=+830.880992631" observedRunningTime="2026-02-23 17:44:39.60334121 +0000 UTC m=+835.419540810" watchObservedRunningTime="2026-02-23 17:44:39.603687709 +0000 UTC m=+835.419887309" Feb 23 17:44:42 crc kubenswrapper[4724]: I0223 17:44:42.585073 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:42 crc kubenswrapper[4724]: I0223 17:44:42.621799 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:47 crc kubenswrapper[4724]: I0223 17:44:47.589762 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6cjbc" Feb 23 17:44:47 crc kubenswrapper[4724]: I0223 17:44:47.950361 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kcvmh" Feb 23 17:44:48 crc kubenswrapper[4724]: I0223 17:44:48.947101 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dxbt6" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.090670 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.092728 4724 util.go:30] "No sandbox for pod can be found. 
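
Each kubenswrapper payload above follows the klog header format: severity letter, MMDD date, wall-clock time, PID, source file:line, then a structured message. A small Go sketch for pulling those fields apart, using one of the probe lines above as input:

```go
// Sketch: split a klog-formatted kubelet line (as embedded in the journal
// entries above) into its header fields and structured message.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	line := `I0223 17:44:42.585073 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6cjbc"`

	// severity (I/W/E/F) + MMDD, time, pid, file:line, message
	re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		panic("line did not match klog header format")
	}
	fmt.Println("severity:", m[1]) // I
	fmt.Println("date:    ", m[2]) // 0223
	fmt.Println("time:    ", m[3]) // 17:44:42.585073
	fmt.Println("pid:     ", m[4]) // 4724
	fmt.Println("source:  ", m[5]) // kubelet.go:2542
	fmt.Println("message: ", m[6])
}
```
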
Need to start a new one" pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.095991 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.096206 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.099093 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5wlqn" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.171671 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.218641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wcc\" (UniqueName: \"kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc\") pod \"openstack-operator-index-ltrm4\" (UID: \"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6\") " pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.326353 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6wcc\" (UniqueName: \"kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc\") pod \"openstack-operator-index-ltrm4\" (UID: \"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6\") " pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.352884 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6wcc\" (UniqueName: \"kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc\") pod \"openstack-operator-index-ltrm4\" (UID: \"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6\") " pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.447898 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.670500 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:52 crc kubenswrapper[4724]: I0223 17:44:52.762477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ltrm4" event={"ID":"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6","Type":"ContainerStarted","Data":"758493dbeccfecfd84baa913f7a5d7a749f880f2b0f686d8ba4722de71ebb843"} Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.252649 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.789975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ltrm4" event={"ID":"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6","Type":"ContainerStarted","Data":"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e"} Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.816830 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ltrm4" podStartSLOduration=1.551703255 podStartE2EDuration="3.81677786s" podCreationTimestamp="2026-02-23 17:44:52 +0000 UTC" firstStartedPulling="2026-02-23 17:44:52.671726137 +0000 UTC m=+848.487925777" lastFinishedPulling="2026-02-23 17:44:54.936800782 +0000 UTC m=+850.753000382" observedRunningTime="2026-02-23 17:44:55.814620105 +0000 UTC m=+851.630819755" watchObservedRunningTime="2026-02-23 17:44:55.81677786 +0000 UTC m=+851.632977510" Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.867408 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qnjwh"] Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.869555 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.884850 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnjwh"] Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.893557 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdn5\" (UniqueName: \"kubernetes.io/projected/c7f91058-6754-42fd-916c-38da4dd0acd4-kube-api-access-kjdn5\") pod \"openstack-operator-index-qnjwh\" (UID: \"c7f91058-6754-42fd-916c-38da4dd0acd4\") " pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:44:55 crc kubenswrapper[4724]: I0223 17:44:55.994437 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdn5\" (UniqueName: \"kubernetes.io/projected/c7f91058-6754-42fd-916c-38da4dd0acd4-kube-api-access-kjdn5\") pod \"openstack-operator-index-qnjwh\" (UID: \"c7f91058-6754-42fd-916c-38da4dd0acd4\") " pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:44:56 crc kubenswrapper[4724]: I0223 17:44:56.018980 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdn5\" (UniqueName: \"kubernetes.io/projected/c7f91058-6754-42fd-916c-38da4dd0acd4-kube-api-access-kjdn5\") pod \"openstack-operator-index-qnjwh\" (UID: \"c7f91058-6754-42fd-916c-38da4dd0acd4\") " pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:44:56 crc kubenswrapper[4724]: I0223 17:44:56.199121 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:44:56 crc kubenswrapper[4724]: I0223 17:44:56.693488 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnjwh"] Feb 23 17:44:56 crc kubenswrapper[4724]: W0223 17:44:56.709531 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f91058_6754_42fd_916c_38da4dd0acd4.slice/crio-0ab3267b39786cd26ac34187223e03ec1a98ab2bcdb0989f13203b21fd60cb8a WatchSource:0}: Error finding container 0ab3267b39786cd26ac34187223e03ec1a98ab2bcdb0989f13203b21fd60cb8a: Status 404 returned error can't find the container with id 0ab3267b39786cd26ac34187223e03ec1a98ab2bcdb0989f13203b21fd60cb8a Feb 23 17:44:56 crc kubenswrapper[4724]: I0223 17:44:56.800044 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnjwh" event={"ID":"c7f91058-6754-42fd-916c-38da4dd0acd4","Type":"ContainerStarted","Data":"0ab3267b39786cd26ac34187223e03ec1a98ab2bcdb0989f13203b21fd60cb8a"} Feb 23 17:44:56 crc kubenswrapper[4724]: I0223 17:44:56.800226 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ltrm4" podUID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" containerName="registry-server" containerID="cri-o://7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e" gracePeriod=2 Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.289805 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.316219 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6wcc\" (UniqueName: \"kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc\") pod \"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6\" (UID: \"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6\") " Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.321999 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc" (OuterVolumeSpecName: "kube-api-access-c6wcc") pod "d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" (UID: "d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6"). InnerVolumeSpecName "kube-api-access-c6wcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.418066 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6wcc\" (UniqueName: \"kubernetes.io/projected/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6-kube-api-access-c6wcc\") on node \"crc\" DevicePath \"\"" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.752426 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.752526 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.810382 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnjwh" event={"ID":"c7f91058-6754-42fd-916c-38da4dd0acd4","Type":"ContainerStarted","Data":"b97a75c5480caeb5c4bea5b061b5a51dc067a8e7afd7d5e67bbaf571c35567d8"} Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.813964 4724 generic.go:334] "Generic (PLEG): container finished" podID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" containerID="7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e" exitCode=0 Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.814024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ltrm4" event={"ID":"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6","Type":"ContainerDied","Data":"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e"} Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.814053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ltrm4" event={"ID":"d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6","Type":"ContainerDied","Data":"758493dbeccfecfd84baa913f7a5d7a749f880f2b0f686d8ba4722de71ebb843"} Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.814077 4724 scope.go:117] "RemoveContainer" containerID="7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.814155 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ltrm4" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.832127 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qnjwh" podStartSLOduration=2.773370078 podStartE2EDuration="2.832091533s" podCreationTimestamp="2026-02-23 17:44:55 +0000 UTC" firstStartedPulling="2026-02-23 17:44:56.715464121 +0000 UTC m=+852.531663721" lastFinishedPulling="2026-02-23 17:44:56.774185576 +0000 UTC m=+852.590385176" observedRunningTime="2026-02-23 17:44:57.826535779 +0000 UTC m=+853.642735379" watchObservedRunningTime="2026-02-23 17:44:57.832091533 +0000 UTC m=+853.648291143" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.842413 4724 scope.go:117] "RemoveContainer" containerID="7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e" Feb 23 17:44:57 crc kubenswrapper[4724]: E0223 17:44:57.842961 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e\": container with ID starting with 7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e not found: ID does not exist" containerID="7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.843027 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e"} err="failed to get container status \"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e\": rpc error: code = NotFound desc = could not find container \"7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e\": container with ID starting with 7b7dfc037cc42cf8e8e041b731afc92352c360e3c0e068cb820605ddfcdc443e not found: ID does not exist" Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.859892 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:57 crc kubenswrapper[4724]: I0223 17:44:57.865226 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ltrm4"] Feb 23 17:44:58 crc kubenswrapper[4724]: I0223 17:44:58.959261 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" path="/var/lib/kubelet/pods/d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6/volumes" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.153432 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc"] Feb 23 17:45:00 crc kubenswrapper[4724]: E0223 17:45:00.154508 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" containerName="registry-server" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.154686 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" containerName="registry-server" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.154953 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c71b25-d09c-4fc4-b5d2-0f5f7f7a16c6" containerName="registry-server" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.155695 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.155919 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc"] Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.158740 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.159781 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.280290 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssl65\" (UniqueName: \"kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.280359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.280578 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.382246 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssl65\" (UniqueName: \"kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.382312 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.382381 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.383745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume\") pod 
\"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.393194 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.401187 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssl65\" (UniqueName: \"kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65\") pod \"collect-profiles-29531145-668zc\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.475606 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:00 crc kubenswrapper[4724]: I0223 17:45:00.924921 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc"] Feb 23 17:45:00 crc kubenswrapper[4724]: W0223 17:45:00.932975 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee97caf_66fd_4f32_bb1e_e69f22806a7b.slice/crio-b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15 WatchSource:0}: Error finding container b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15: Status 404 returned error can't find the container with id b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15 Feb 23 17:45:01 crc kubenswrapper[4724]: I0223 17:45:01.848872 4724 generic.go:334] "Generic (PLEG): container finished" podID="cee97caf-66fd-4f32-bb1e-e69f22806a7b" containerID="436d965bef8a9bbe24686f042c83e50357c529ed1634eecbd00a8fc85a22ea9c" exitCode=0 Feb 23 17:45:01 crc kubenswrapper[4724]: I0223 17:45:01.848940 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" event={"ID":"cee97caf-66fd-4f32-bb1e-e69f22806a7b","Type":"ContainerDied","Data":"436d965bef8a9bbe24686f042c83e50357c529ed1634eecbd00a8fc85a22ea9c"} Feb 23 17:45:01 crc kubenswrapper[4724]: I0223 17:45:01.849321 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" event={"ID":"cee97caf-66fd-4f32-bb1e-e69f22806a7b","Type":"ContainerStarted","Data":"b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15"} Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.082625 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.126007 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume\") pod \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.126150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssl65\" (UniqueName: \"kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65\") pod \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.126208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume\") pod \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\" (UID: \"cee97caf-66fd-4f32-bb1e-e69f22806a7b\") " Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.127621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume" (OuterVolumeSpecName: "config-volume") pod "cee97caf-66fd-4f32-bb1e-e69f22806a7b" (UID: "cee97caf-66fd-4f32-bb1e-e69f22806a7b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.132926 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65" (OuterVolumeSpecName: "kube-api-access-ssl65") pod "cee97caf-66fd-4f32-bb1e-e69f22806a7b" (UID: "cee97caf-66fd-4f32-bb1e-e69f22806a7b"). InnerVolumeSpecName "kube-api-access-ssl65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.147330 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cee97caf-66fd-4f32-bb1e-e69f22806a7b" (UID: "cee97caf-66fd-4f32-bb1e-e69f22806a7b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.228452 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee97caf-66fd-4f32-bb1e-e69f22806a7b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.228495 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cee97caf-66fd-4f32-bb1e-e69f22806a7b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.228509 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssl65\" (UniqueName: \"kubernetes.io/projected/cee97caf-66fd-4f32-bb1e-e69f22806a7b-kube-api-access-ssl65\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.867775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" event={"ID":"cee97caf-66fd-4f32-bb1e-e69f22806a7b","Type":"ContainerDied","Data":"b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15"} Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.867824 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b240d3cb82556d754b35ceac51815ca667c54c565199ab32f1620a682293ef15" Feb 23 17:45:03 crc kubenswrapper[4724]: I0223 17:45:03.867901 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc" Feb 23 17:45:06 crc kubenswrapper[4724]: I0223 17:45:06.200302 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:45:06 crc kubenswrapper[4724]: I0223 17:45:06.202491 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:45:06 crc kubenswrapper[4724]: I0223 17:45:06.227918 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:45:06 crc kubenswrapper[4724]: I0223 17:45:06.913886 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qnjwh" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.900530 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8"] Feb 23 17:45:07 crc kubenswrapper[4724]: E0223 17:45:07.900886 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee97caf-66fd-4f32-bb1e-e69f22806a7b" containerName="collect-profiles" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.900933 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee97caf-66fd-4f32-bb1e-e69f22806a7b" containerName="collect-profiles" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.901139 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee97caf-66fd-4f32-bb1e-e69f22806a7b" containerName="collect-profiles" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.902206 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.906343 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-c57p5" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.914221 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8"] Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.998976 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.999166 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48n6j\" (UniqueName: \"kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:07 crc kubenswrapper[4724]: I0223 17:45:07.999198 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.100340 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48n6j\" (UniqueName: \"kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.100401 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.100479 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.101031 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.101062 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.120722 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48n6j\" (UniqueName: \"kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j\") pod \"d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.223490 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.662415 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8"] Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.902349 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerStarted","Data":"dbae35c5e1aa0894198666ec62b1e6669ded87bac02ab4d19807f3e6ca1e9ec6"} Feb 23 17:45:08 crc kubenswrapper[4724]: I0223 17:45:08.902414 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerStarted","Data":"3e292fb940cfd6e1bb50c0cfaf63cc4799c20ad33a24cea81f28bd49c44a4e3e"} Feb 23 17:45:09 crc kubenswrapper[4724]: I0223 17:45:09.913072 4724 generic.go:334] "Generic (PLEG): container finished" podID="da865614-a81a-4de0-b6e4-8be443632fa5" containerID="dbae35c5e1aa0894198666ec62b1e6669ded87bac02ab4d19807f3e6ca1e9ec6" exitCode=0 Feb 23 17:45:09 crc kubenswrapper[4724]: I0223 17:45:09.913165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerDied","Data":"dbae35c5e1aa0894198666ec62b1e6669ded87bac02ab4d19807f3e6ca1e9ec6"} Feb 23 17:45:10 crc kubenswrapper[4724]: I0223 17:45:10.922501 4724 generic.go:334] "Generic (PLEG): container finished" podID="da865614-a81a-4de0-b6e4-8be443632fa5" containerID="c77df8b45d930b6efa98ef1b6ca17057c0248bb89e199e11f8e6505c2644f5df" exitCode=0 Feb 23 17:45:10 crc kubenswrapper[4724]: I0223 17:45:10.923499 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" 
event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerDied","Data":"c77df8b45d930b6efa98ef1b6ca17057c0248bb89e199e11f8e6505c2644f5df"} Feb 23 17:45:11 crc kubenswrapper[4724]: I0223 17:45:11.933453 4724 generic.go:334] "Generic (PLEG): container finished" podID="da865614-a81a-4de0-b6e4-8be443632fa5" containerID="af8012bc009de337d90b4e971987207782b6d7cd2b27489e67808ae7d202784e" exitCode=0 Feb 23 17:45:11 crc kubenswrapper[4724]: I0223 17:45:11.933502 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerDied","Data":"af8012bc009de337d90b4e971987207782b6d7cd2b27489e67808ae7d202784e"} Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.219771 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.279782 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util\") pod \"da865614-a81a-4de0-b6e4-8be443632fa5\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.279961 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle\") pod \"da865614-a81a-4de0-b6e4-8be443632fa5\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.280021 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48n6j\" (UniqueName: \"kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j\") pod \"da865614-a81a-4de0-b6e4-8be443632fa5\" (UID: \"da865614-a81a-4de0-b6e4-8be443632fa5\") " Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.281227 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle" (OuterVolumeSpecName: "bundle") pod "da865614-a81a-4de0-b6e4-8be443632fa5" (UID: "da865614-a81a-4de0-b6e4-8be443632fa5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.287287 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j" (OuterVolumeSpecName: "kube-api-access-48n6j") pod "da865614-a81a-4de0-b6e4-8be443632fa5" (UID: "da865614-a81a-4de0-b6e4-8be443632fa5"). InnerVolumeSpecName "kube-api-access-48n6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.294689 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util" (OuterVolumeSpecName: "util") pod "da865614-a81a-4de0-b6e4-8be443632fa5" (UID: "da865614-a81a-4de0-b6e4-8be443632fa5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.381890 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.381933 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48n6j\" (UniqueName: \"kubernetes.io/projected/da865614-a81a-4de0-b6e4-8be443632fa5-kube-api-access-48n6j\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.381944 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da865614-a81a-4de0-b6e4-8be443632fa5-util\") on node \"crc\" DevicePath \"\"" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.954284 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" event={"ID":"da865614-a81a-4de0-b6e4-8be443632fa5","Type":"ContainerDied","Data":"3e292fb940cfd6e1bb50c0cfaf63cc4799c20ad33a24cea81f28bd49c44a4e3e"} Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.955202 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e292fb940cfd6e1bb50c0cfaf63cc4799c20ad33a24cea81f28bd49c44a4e3e" Feb 23 17:45:13 crc kubenswrapper[4724]: I0223 17:45:13.955501 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.488107 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl"] Feb 23 17:45:20 crc kubenswrapper[4724]: E0223 17:45:20.489487 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="pull" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.489502 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="pull" Feb 23 17:45:20 crc kubenswrapper[4724]: E0223 17:45:20.489520 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="extract" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.489527 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="extract" Feb 23 17:45:20 crc kubenswrapper[4724]: E0223 17:45:20.489535 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="util" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.489541 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="util" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.489661 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="da865614-a81a-4de0-b6e4-8be443632fa5" containerName="extract" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.490715 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.502419 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-csvdk" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.548222 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl"] Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.601574 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbztg\" (UniqueName: \"kubernetes.io/projected/264513fc-f807-42c5-8089-abc30cf6404b-kube-api-access-kbztg\") pod \"openstack-operator-controller-init-9d7777f98-c6ttl\" (UID: \"264513fc-f807-42c5-8089-abc30cf6404b\") " pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.703180 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbztg\" (UniqueName: \"kubernetes.io/projected/264513fc-f807-42c5-8089-abc30cf6404b-kube-api-access-kbztg\") pod \"openstack-operator-controller-init-9d7777f98-c6ttl\" (UID: \"264513fc-f807-42c5-8089-abc30cf6404b\") " pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.728105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbztg\" (UniqueName: \"kubernetes.io/projected/264513fc-f807-42c5-8089-abc30cf6404b-kube-api-access-kbztg\") pod \"openstack-operator-controller-init-9d7777f98-c6ttl\" (UID: \"264513fc-f807-42c5-8089-abc30cf6404b\") " pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:20 crc kubenswrapper[4724]: I0223 17:45:20.831103 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:21 crc kubenswrapper[4724]: I0223 17:45:21.185723 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl"] Feb 23 17:45:22 crc kubenswrapper[4724]: I0223 17:45:22.040355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" event={"ID":"264513fc-f807-42c5-8089-abc30cf6404b","Type":"ContainerStarted","Data":"e610c3737565d017886f10754e422c3a5b669a5a0418a6f8c65e608a6845fdb5"} Feb 23 17:45:26 crc kubenswrapper[4724]: I0223 17:45:26.074172 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" event={"ID":"264513fc-f807-42c5-8089-abc30cf6404b","Type":"ContainerStarted","Data":"e580126f025fde35b19b837a1ad87c21007b159ebff3f25f0456a00a58ba06b6"} Feb 23 17:45:26 crc kubenswrapper[4724]: I0223 17:45:26.075085 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:26 crc kubenswrapper[4724]: I0223 17:45:26.106162 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" podStartSLOduration=1.838875776 podStartE2EDuration="6.106131636s" podCreationTimestamp="2026-02-23 17:45:20 +0000 UTC" firstStartedPulling="2026-02-23 17:45:21.19468679 +0000 UTC m=+877.010886390" lastFinishedPulling="2026-02-23 17:45:25.46194265 +0000 UTC m=+881.278142250" observedRunningTime="2026-02-23 17:45:26.101084676 +0000 UTC m=+881.917284276" watchObservedRunningTime="2026-02-23 17:45:26.106131636 +0000 UTC m=+881.922331256" Feb 23 17:45:27 crc kubenswrapper[4724]: I0223 17:45:27.752295 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:45:27 crc kubenswrapper[4724]: I0223 17:45:27.752369 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:45:30 crc kubenswrapper[4724]: I0223 17:45:30.834925 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-9d7777f98-c6ttl" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.106743 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.108770 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.115668 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.217305 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l68g\" (UniqueName: \"kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.217401 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.217435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.319635 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l68g\" (UniqueName: \"kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.319717 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.319769 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.320229 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.320337 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.344600 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4l68g\" (UniqueName: \"kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g\") pod \"redhat-operators-gr5gv\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.436673 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:33 crc kubenswrapper[4724]: I0223 17:45:33.893970 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:45:34 crc kubenswrapper[4724]: I0223 17:45:34.129804 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerStarted","Data":"abbceca4f083b28ae85ac4dccc5233e640f158f61263c4d625ce616fd9a4bedb"} Feb 23 17:45:35 crc kubenswrapper[4724]: I0223 17:45:35.138717 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerID="13627d586fe2fa5e81796f76ef9dd2cfddefc7c8a9c107ddd8d1956f23b678a4" exitCode=0 Feb 23 17:45:35 crc kubenswrapper[4724]: I0223 17:45:35.138760 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerDied","Data":"13627d586fe2fa5e81796f76ef9dd2cfddefc7c8a9c107ddd8d1956f23b678a4"} Feb 23 17:45:37 crc kubenswrapper[4724]: I0223 17:45:37.154900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerStarted","Data":"a2500ba67cd05c470126b8314a4877efbba0869e7d1b226f81454834e6ea0992"} Feb 23 17:45:38 crc kubenswrapper[4724]: I0223 17:45:38.163443 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerID="a2500ba67cd05c470126b8314a4877efbba0869e7d1b226f81454834e6ea0992" exitCode=0 Feb 23 17:45:38 crc kubenswrapper[4724]: I0223 17:45:38.163546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerDied","Data":"a2500ba67cd05c470126b8314a4877efbba0869e7d1b226f81454834e6ea0992"} Feb 23 17:45:40 crc kubenswrapper[4724]: I0223 17:45:40.177566 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerStarted","Data":"ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5"} Feb 23 17:45:40 crc kubenswrapper[4724]: I0223 17:45:40.201135 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gr5gv" podStartSLOduration=2.999147756 podStartE2EDuration="7.201103982s" podCreationTimestamp="2026-02-23 17:45:33 +0000 UTC" firstStartedPulling="2026-02-23 17:45:35.140356274 +0000 UTC m=+890.956555874" lastFinishedPulling="2026-02-23 17:45:39.3423125 +0000 UTC m=+895.158512100" observedRunningTime="2026-02-23 17:45:40.195115787 +0000 UTC m=+896.011315407" watchObservedRunningTime="2026-02-23 17:45:40.201103982 +0000 UTC m=+896.017303582" Feb 23 17:45:43 crc kubenswrapper[4724]: I0223 17:45:43.437819 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:43 
crc kubenswrapper[4724]: I0223 17:45:43.438288 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:44 crc kubenswrapper[4724]: I0223 17:45:44.497094 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gr5gv" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server" probeResult="failure" output=< Feb 23 17:45:44 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 17:45:44 crc kubenswrapper[4724]: > Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.162774 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-dxbt6" podUID="a55b73c4-da87-4ce8-8418-3d6d854c0b0e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.162900 4724 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.164431 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.289953 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.299653 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75ft9"
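The startup-probe failure above for redhat-operators-gr5gv reports timeout: failed to connect service ":50051" within 1s, a connect check against the registry-server port with a one-second budget; the speaker and kube-scheduler failures that follow are the HTTP equivalents. A minimal Go sketch of the dial-with-deadline shape (the real probe is a gRPC health check, so this only mirrors the timeout behaviour):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // One-second budget, matching the "within 1s" in the probe output.
        conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
            return
        }
        conn.Close()
        fmt.Println("probe ok")
    }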
Need to start a new one" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.326542 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.462106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.462287 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.462550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdh9j\" (UniqueName: \"kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.563968 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdh9j\" (UniqueName: \"kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.564076 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.564117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.564696 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.564981 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.591694 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jdh9j\" (UniqueName: \"kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j\") pod \"community-operators-75ft9\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:50 crc kubenswrapper[4724]: I0223 17:45:50.644965 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.267859 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:45:51 crc kubenswrapper[4724]: W0223 17:45:51.280921 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fb435d7_a53d_44d4_b800_23f60d2aac7c.slice/crio-a0f50e73d1b184eb7bd4b6e257ecfe5dc6fdec3ed1b4c66fcf9c12d010361aff WatchSource:0}: Error finding container a0f50e73d1b184eb7bd4b6e257ecfe5dc6fdec3ed1b4c66fcf9c12d010361aff: Status 404 returned error can't find the container with id a0f50e73d1b184eb7bd4b6e257ecfe5dc6fdec3ed1b4c66fcf9c12d010361aff Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.884334 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.886208 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.889531 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-flzq8" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.889554 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.891973 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.894141 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5lzzx" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.905930 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.911208 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.921366 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.922679 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.924950 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.925999 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.934240 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-js45d" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.934845 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r5v2d" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.947588 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.963097 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.985477 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72"] Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.986703 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.989649 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-f7jf8" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.991485 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skm5p\" (UniqueName: \"kubernetes.io/projected/a4842ca7-909d-4d11-bba6-75555f3599b3-kube-api-access-skm5p\") pod \"cinder-operator-controller-manager-55d77d7b5c-zm7cw\" (UID: \"a4842ca7-909d-4d11-bba6-75555f3599b3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:45:51 crc kubenswrapper[4724]: I0223 17:45:51.991598 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krwc\" (UniqueName: \"kubernetes.io/projected/70c55fa9-1fa4-415c-98c4-adfe080201d1-kube-api-access-7krwc\") pod \"barbican-operator-controller-manager-868647ff47-4zgfm\" (UID: \"70c55fa9-1fa4-415c-98c4-adfe080201d1\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.009500 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.010578 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.014003 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-nv4tf" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.043918 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.055760 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.070759 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.072115 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.075871 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t9rh9" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.076123 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.093308 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvjzx\" (UniqueName: \"kubernetes.io/projected/6b607306-d732-4142-83d4-92ae20c714cd-kube-api-access-zvjzx\") pod \"heat-operator-controller-manager-69f49c598c-f5x72\" (UID: \"6b607306-d732-4142-83d4-92ae20c714cd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.093446 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skm5p\" (UniqueName: \"kubernetes.io/projected/a4842ca7-909d-4d11-bba6-75555f3599b3-kube-api-access-skm5p\") pod \"cinder-operator-controller-manager-55d77d7b5c-zm7cw\" (UID: \"a4842ca7-909d-4d11-bba6-75555f3599b3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.093497 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxdj\" (UniqueName: \"kubernetes.io/projected/dedf8817-f3cf-4630-a825-71059f681d10-kube-api-access-jqxdj\") pod \"glance-operator-controller-manager-784b5bb6c5-gmdl7\" (UID: \"dedf8817-f3cf-4630-a825-71059f681d10\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.093617 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krwc\" (UniqueName: \"kubernetes.io/projected/70c55fa9-1fa4-415c-98c4-adfe080201d1-kube-api-access-7krwc\") pod \"barbican-operator-controller-manager-868647ff47-4zgfm\" (UID: \"70c55fa9-1fa4-415c-98c4-adfe080201d1\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.093658 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n7n2p\" (UniqueName: \"kubernetes.io/projected/967a6928-46e0-4a1e-90bd-cc9a204d9099-kube-api-access-n7n2p\") pod \"designate-operator-controller-manager-6d8bf5c495-vqls9\" (UID: \"967a6928-46e0-4a1e-90bd-cc9a204d9099\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.129508 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.143033 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.144122 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.149967 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-sx6x6" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.151068 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skm5p\" (UniqueName: \"kubernetes.io/projected/a4842ca7-909d-4d11-bba6-75555f3599b3-kube-api-access-skm5p\") pod \"cinder-operator-controller-manager-55d77d7b5c-zm7cw\" (UID: \"a4842ca7-909d-4d11-bba6-75555f3599b3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.151071 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krwc\" (UniqueName: \"kubernetes.io/projected/70c55fa9-1fa4-415c-98c4-adfe080201d1-kube-api-access-7krwc\") pod \"barbican-operator-controller-manager-868647ff47-4zgfm\" (UID: \"70c55fa9-1fa4-415c-98c4-adfe080201d1\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.156462 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.157526 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.164000 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4qnl4" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.187988 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.189533 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.197632 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cvdfr" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198768 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8xv\" (UniqueName: \"kubernetes.io/projected/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-kube-api-access-2d8xv\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n2p\" (UniqueName: \"kubernetes.io/projected/967a6928-46e0-4a1e-90bd-cc9a204d9099-kube-api-access-n7n2p\") pod \"designate-operator-controller-manager-6d8bf5c495-vqls9\" (UID: \"967a6928-46e0-4a1e-90bd-cc9a204d9099\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvjzx\" (UniqueName: \"kubernetes.io/projected/6b607306-d732-4142-83d4-92ae20c714cd-kube-api-access-zvjzx\") pod \"heat-operator-controller-manager-69f49c598c-f5x72\" (UID: \"6b607306-d732-4142-83d4-92ae20c714cd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198898 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198957 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxdj\" (UniqueName: \"kubernetes.io/projected/dedf8817-f3cf-4630-a825-71059f681d10-kube-api-access-jqxdj\") pod \"glance-operator-controller-manager-784b5bb6c5-gmdl7\" (UID: \"dedf8817-f3cf-4630-a825-71059f681d10\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.198990 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8c7\" (UniqueName: \"kubernetes.io/projected/2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5-kube-api-access-nn8c7\") pod \"horizon-operator-controller-manager-5b9b8895d5-9gtq7\" (UID: \"2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.203383 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.211055 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.234136 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.265224 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvjzx\" (UniqueName: \"kubernetes.io/projected/6b607306-d732-4142-83d4-92ae20c714cd-kube-api-access-zvjzx\") pod \"heat-operator-controller-manager-69f49c598c-f5x72\" (UID: \"6b607306-d732-4142-83d4-92ae20c714cd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.266173 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxdj\" (UniqueName: \"kubernetes.io/projected/dedf8817-f3cf-4630-a825-71059f681d10-kube-api-access-jqxdj\") pod \"glance-operator-controller-manager-784b5bb6c5-gmdl7\" (UID: \"dedf8817-f3cf-4630-a825-71059f681d10\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.269033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7n2p\" (UniqueName: \"kubernetes.io/projected/967a6928-46e0-4a1e-90bd-cc9a204d9099-kube-api-access-n7n2p\") pod \"designate-operator-controller-manager-6d8bf5c495-vqls9\" (UID: \"967a6928-46e0-4a1e-90bd-cc9a204d9099\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.276078 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.281071 4724 generic.go:334] "Generic (PLEG): container finished" podID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerID="64e1204d137d00021831cf1bb440cb4743e9f200ac107e7c6294c36ffa84c9f2" exitCode=0 Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.281148 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerDied","Data":"64e1204d137d00021831cf1bb440cb4743e9f200ac107e7c6294c36ffa84c9f2"} Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.281190 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerStarted","Data":"a0f50e73d1b184eb7bd4b6e257ecfe5dc6fdec3ed1b4c66fcf9c12d010361aff"} Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.297924 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.305922 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8c7\" (UniqueName: \"kubernetes.io/projected/2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5-kube-api-access-nn8c7\") pod \"horizon-operator-controller-manager-5b9b8895d5-9gtq7\" (UID: \"2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.310006 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2d8xv\" (UniqueName: \"kubernetes.io/projected/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-kube-api-access-2d8xv\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.310091 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrrm\" (UniqueName: \"kubernetes.io/projected/dd866f81-0e85-4690-b16d-45baf5e856ed-kube-api-access-svrrm\") pod \"ironic-operator-controller-manager-554564d7fc-22lgm\" (UID: \"dd866f81-0e85-4690-b16d-45baf5e856ed\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.310206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fwc\" (UniqueName: \"kubernetes.io/projected/973124e7-0723-4a5d-ab81-0ef8619f8754-kube-api-access-g5fwc\") pod \"keystone-operator-controller-manager-b4d948c87-djmpk\" (UID: \"973124e7-0723-4a5d-ab81-0ef8619f8754\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.310248 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.310323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgp4\" (UniqueName: \"kubernetes.io/projected/b906fefc-aaf5-48c0-b45b-3d11dbda1c3e-kube-api-access-swgp4\") pod \"manila-operator-controller-manager-67d996989d-fxj7d\" (UID: \"b906fefc-aaf5-48c0-b45b-3d11dbda1c3e\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.313092 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.313201 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert podName:7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:52.81316729 +0000 UTC m=+908.629366890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert") pod "infra-operator-controller-manager-79d975b745-pb2dv" (UID: "7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3") : secret "infra-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.313700 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.345803 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.369825 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.415546 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8xv\" (UniqueName: \"kubernetes.io/projected/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-kube-api-access-2d8xv\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.432584 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8c7\" (UniqueName: \"kubernetes.io/projected/2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5-kube-api-access-nn8c7\") pod \"horizon-operator-controller-manager-5b9b8895d5-9gtq7\" (UID: \"2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.433406 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.433466 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgp4\" (UniqueName: \"kubernetes.io/projected/b906fefc-aaf5-48c0-b45b-3d11dbda1c3e-kube-api-access-swgp4\") pod \"manila-operator-controller-manager-67d996989d-fxj7d\" (UID: \"b906fefc-aaf5-48c0-b45b-3d11dbda1c3e\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.434988 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svrrm\" (UniqueName: \"kubernetes.io/projected/dd866f81-0e85-4690-b16d-45baf5e856ed-kube-api-access-svrrm\") pod \"ironic-operator-controller-manager-554564d7fc-22lgm\" (UID: \"dd866f81-0e85-4690-b16d-45baf5e856ed\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.435094 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fwc\" (UniqueName: \"kubernetes.io/projected/973124e7-0723-4a5d-ab81-0ef8619f8754-kube-api-access-g5fwc\") pod \"keystone-operator-controller-manager-b4d948c87-djmpk\" (UID: \"973124e7-0723-4a5d-ab81-0ef8619f8754\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.442828 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.448264 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7jnmk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.461022 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.462366 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.470879 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svrrm\" (UniqueName: \"kubernetes.io/projected/dd866f81-0e85-4690-b16d-45baf5e856ed-kube-api-access-svrrm\") pod \"ironic-operator-controller-manager-554564d7fc-22lgm\" (UID: \"dd866f81-0e85-4690-b16d-45baf5e856ed\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.470955 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.479830 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mx9f6" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.482059 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgp4\" (UniqueName: \"kubernetes.io/projected/b906fefc-aaf5-48c0-b45b-3d11dbda1c3e-kube-api-access-swgp4\") pod \"manila-operator-controller-manager-67d996989d-fxj7d\" (UID: \"b906fefc-aaf5-48c0-b45b-3d11dbda1c3e\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.491249 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fwc\" (UniqueName: \"kubernetes.io/projected/973124e7-0723-4a5d-ab81-0ef8619f8754-kube-api-access-g5fwc\") pod \"keystone-operator-controller-manager-b4d948c87-djmpk\" (UID: \"973124e7-0723-4a5d-ab81-0ef8619f8754\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.498221 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.504132 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.509432 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.513026 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-prqwh" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.513221 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.514721 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.516730 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tcvwn" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.516946 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.538177 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dnc\" (UniqueName: \"kubernetes.io/projected/8b193934-08d8-4435-ae40-8b4d7b4878e7-kube-api-access-68dnc\") pod \"neutron-operator-controller-manager-6bd4687957-9s4mk\" (UID: \"8b193934-08d8-4435-ae40-8b4d7b4878e7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.538508 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.561625 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.568027 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.571601 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.573561 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.575170 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ttrtr" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.589132 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.590097 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.594046 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mgdws" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.596563 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.600059 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.607279 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.618472 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5z42p" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.629311 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.629453 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.649551 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.650610 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.651073 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsn2g\" (UniqueName: \"kubernetes.io/projected/24d796b9-e6ea-4b70-9424-1352f71c80a6-kube-api-access-nsn2g\") pod \"octavia-operator-controller-manager-659dc6bbfc-p42tx\" (UID: \"24d796b9-e6ea-4b70-9424-1352f71c80a6\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.651134 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wfvz\" (UniqueName: \"kubernetes.io/projected/73da6414-95e9-4d5a-a0ca-fbeb32048153-kube-api-access-6wfvz\") pod \"mariadb-operator-controller-manager-6994f66f48-xdfp8\" (UID: \"73da6414-95e9-4d5a-a0ca-fbeb32048153\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.651215 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68dnc\" (UniqueName: \"kubernetes.io/projected/8b193934-08d8-4435-ae40-8b4d7b4878e7-kube-api-access-68dnc\") pod \"neutron-operator-controller-manager-6bd4687957-9s4mk\" (UID: \"8b193934-08d8-4435-ae40-8b4d7b4878e7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.651262 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmwq\" (UniqueName: \"kubernetes.io/projected/8bc03a47-9ded-40c0-b924-0c936950a12a-kube-api-access-stmwq\") pod \"nova-operator-controller-manager-567668f5cf-d5z2j\" (UID: \"8bc03a47-9ded-40c0-b924-0c936950a12a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.656227 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8n5h" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.659972 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.672055 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.687472 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.702607 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68dnc\" (UniqueName: \"kubernetes.io/projected/8b193934-08d8-4435-ae40-8b4d7b4878e7-kube-api-access-68dnc\") pod \"neutron-operator-controller-manager-6bd4687957-9s4mk\" (UID: \"8b193934-08d8-4435-ae40-8b4d7b4878e7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.703825 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.734689 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.735996 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.745700 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lf55v" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.745919 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.747684 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.749242 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.756721 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.757419 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2mcfz" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.758558 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r4t\" (UniqueName: \"kubernetes.io/projected/e37a1f8b-cee7-4a13-879e-496d26735ab4-kube-api-access-v8r4t\") pod \"swift-operator-controller-manager-68f46476f-wqsvk\" (UID: \"e37a1f8b-cee7-4a13-879e-496d26735ab4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.758896 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsn2g\" (UniqueName: \"kubernetes.io/projected/24d796b9-e6ea-4b70-9424-1352f71c80a6-kube-api-access-nsn2g\") pod \"octavia-operator-controller-manager-659dc6bbfc-p42tx\" (UID: \"24d796b9-e6ea-4b70-9424-1352f71c80a6\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759021 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfvz\" (UniqueName: \"kubernetes.io/projected/73da6414-95e9-4d5a-a0ca-fbeb32048153-kube-api-access-6wfvz\") pod \"mariadb-operator-controller-manager-6994f66f48-xdfp8\" (UID: \"73da6414-95e9-4d5a-a0ca-fbeb32048153\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759143 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bq4s\" (UniqueName: \"kubernetes.io/projected/63923048-2ad5-45f9-9285-9d84dc711fa7-kube-api-access-4bq4s\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759271 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktmw\" (UniqueName: \"kubernetes.io/projected/a8f9c97e-0259-4c6e-b188-33081d1706fd-kube-api-access-xktmw\") pod \"ovn-operator-controller-manager-5955d8c787-92g5j\" (UID: \"a8f9c97e-0259-4c6e-b188-33081d1706fd\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759387 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfqlz\" (UniqueName: \"kubernetes.io/projected/77ba1933-d39b-4b30-9d8c-1500d7293444-kube-api-access-pfqlz\") pod \"placement-operator-controller-manager-8497b45c89-szmk8\" (UID: \"77ba1933-d39b-4b30-9d8c-1500d7293444\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" 
Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759544 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stmwq\" (UniqueName: \"kubernetes.io/projected/8bc03a47-9ded-40c0-b924-0c936950a12a-kube-api-access-stmwq\") pod \"nova-operator-controller-manager-567668f5cf-d5z2j\" (UID: \"8bc03a47-9ded-40c0-b924-0c936950a12a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.759646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.793401 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.794768 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.800649 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.801260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfvz\" (UniqueName: \"kubernetes.io/projected/73da6414-95e9-4d5a-a0ca-fbeb32048153-kube-api-access-6wfvz\") pod \"mariadb-operator-controller-manager-6994f66f48-xdfp8\" (UID: \"73da6414-95e9-4d5a-a0ca-fbeb32048153\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.801466 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmwq\" (UniqueName: \"kubernetes.io/projected/8bc03a47-9ded-40c0-b924-0c936950a12a-kube-api-access-stmwq\") pod \"nova-operator-controller-manager-567668f5cf-d5z2j\" (UID: \"8bc03a47-9ded-40c0-b924-0c936950a12a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.802730 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-87fdz" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.811327 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.818729 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsn2g\" (UniqueName: \"kubernetes.io/projected/24d796b9-e6ea-4b70-9424-1352f71c80a6-kube-api-access-nsn2g\") pod \"octavia-operator-controller-manager-659dc6bbfc-p42tx\" (UID: \"24d796b9-e6ea-4b70-9424-1352f71c80a6\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862723 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlqz\" (UniqueName: 
\"kubernetes.io/projected/3b37faa8-6e4e-427a-9c1a-84993ed85290-kube-api-access-grlqz\") pod \"telemetry-operator-controller-manager-589c568786-d85f4\" (UID: \"3b37faa8-6e4e-427a-9c1a-84993ed85290\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862785 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862817 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8r4t\" (UniqueName: \"kubernetes.io/projected/e37a1f8b-cee7-4a13-879e-496d26735ab4-kube-api-access-v8r4t\") pod \"swift-operator-controller-manager-68f46476f-wqsvk\" (UID: \"e37a1f8b-cee7-4a13-879e-496d26735ab4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862856 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjvtd\" (UniqueName: \"kubernetes.io/projected/ca793345-c1e2-4207-844b-170dd5b70066-kube-api-access-pjvtd\") pod \"test-operator-controller-manager-5dc6794d5b-4tnw2\" (UID: \"ca793345-c1e2-4207-844b-170dd5b70066\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862878 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bq4s\" (UniqueName: \"kubernetes.io/projected/63923048-2ad5-45f9-9285-9d84dc711fa7-kube-api-access-4bq4s\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862916 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktmw\" (UniqueName: \"kubernetes.io/projected/a8f9c97e-0259-4c6e-b188-33081d1706fd-kube-api-access-xktmw\") pod \"ovn-operator-controller-manager-5955d8c787-92g5j\" (UID: \"a8f9c97e-0259-4c6e-b188-33081d1706fd\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862941 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfqlz\" (UniqueName: \"kubernetes.io/projected/77ba1933-d39b-4b30-9d8c-1500d7293444-kube-api-access-pfqlz\") pod \"placement-operator-controller-manager-8497b45c89-szmk8\" (UID: \"77ba1933-d39b-4b30-9d8c-1500d7293444\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.862961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.863108 4724 
secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.863170 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:53.363147576 +0000 UTC m=+909.179347176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.863186 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: E0223 17:45:52.863263 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert podName:7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:53.863234978 +0000 UTC m=+909.679434578 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert") pod "infra-operator-controller-manager-79d975b745-pb2dv" (UID: "7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3") : secret "infra-operator-webhook-server-cert" not found Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.882714 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.883752 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.890968 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8r4t\" (UniqueName: \"kubernetes.io/projected/e37a1f8b-cee7-4a13-879e-496d26735ab4-kube-api-access-v8r4t\") pod \"swift-operator-controller-manager-68f46476f-wqsvk\" (UID: \"e37a1f8b-cee7-4a13-879e-496d26735ab4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.891249 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.893149 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xn8s6" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.893279 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.922190 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktmw\" (UniqueName: \"kubernetes.io/projected/a8f9c97e-0259-4c6e-b188-33081d1706fd-kube-api-access-xktmw\") pod \"ovn-operator-controller-manager-5955d8c787-92g5j\" (UID: \"a8f9c97e-0259-4c6e-b188-33081d1706fd\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.925442 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfqlz\" (UniqueName: \"kubernetes.io/projected/77ba1933-d39b-4b30-9d8c-1500d7293444-kube-api-access-pfqlz\") pod \"placement-operator-controller-manager-8497b45c89-szmk8\" (UID: \"77ba1933-d39b-4b30-9d8c-1500d7293444\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.925988 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.928245 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bq4s\" (UniqueName: \"kubernetes.io/projected/63923048-2ad5-45f9-9285-9d84dc711fa7-kube-api-access-4bq4s\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.943679 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"] Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.969036 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grlqz\" (UniqueName: \"kubernetes.io/projected/3b37faa8-6e4e-427a-9c1a-84993ed85290-kube-api-access-grlqz\") pod \"telemetry-operator-controller-manager-589c568786-d85f4\" (UID: \"3b37faa8-6e4e-427a-9c1a-84993ed85290\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972200 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8z8\" (UniqueName: \"kubernetes.io/projected/5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a-kube-api-access-5v8z8\") pod \"watcher-operator-controller-manager-5cb6b78489-7tdgw\" (UID: \"5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a\") " pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972474 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972703 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjvtd\" (UniqueName: \"kubernetes.io/projected/ca793345-c1e2-4207-844b-170dd5b70066-kube-api-access-pjvtd\") pod \"test-operator-controller-manager-5dc6794d5b-4tnw2\" (UID: \"ca793345-c1e2-4207-844b-170dd5b70066\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:45:52 crc kubenswrapper[4724]: I0223 17:45:52.972991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znh7\" (UniqueName: \"kubernetes.io/projected/c38380c9-1ff8-4a96-9c4a-15ed760a25db-kube-api-access-6znh7\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.004879 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.010176 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjvtd\" (UniqueName: \"kubernetes.io/projected/ca793345-c1e2-4207-844b-170dd5b70066-kube-api-access-pjvtd\") pod \"test-operator-controller-manager-5dc6794d5b-4tnw2\" (UID: \"ca793345-c1e2-4207-844b-170dd5b70066\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.012065 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlqz\" (UniqueName: \"kubernetes.io/projected/3b37faa8-6e4e-427a-9c1a-84993ed85290-kube-api-access-grlqz\") pod \"telemetry-operator-controller-manager-589c568786-d85f4\" (UID: \"3b37faa8-6e4e-427a-9c1a-84993ed85290\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.038122 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.044572 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.050640 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-n4g8c" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.060873 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.081411 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6znh7\" (UniqueName: \"kubernetes.io/projected/c38380c9-1ff8-4a96-9c4a-15ed760a25db-kube-api-access-6znh7\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.081574 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8z8\" (UniqueName: \"kubernetes.io/projected/5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a-kube-api-access-5v8z8\") pod \"watcher-operator-controller-manager-5cb6b78489-7tdgw\" (UID: \"5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a\") " pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.081606 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.081631 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") 
" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.081794 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.081862 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:45:53.581843177 +0000 UTC m=+909.398042777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.082509 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.082535 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:45:53.582527175 +0000 UTC m=+909.398726775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.100950 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.111524 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6znh7\" (UniqueName: \"kubernetes.io/projected/c38380c9-1ff8-4a96-9c4a-15ed760a25db-kube-api-access-6znh7\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.118291 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8z8\" (UniqueName: \"kubernetes.io/projected/5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a-kube-api-access-5v8z8\") pod \"watcher-operator-controller-manager-5cb6b78489-7tdgw\" (UID: \"5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a\") " pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.134875 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.151159 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.159669 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.183092 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fds97\" (UniqueName: \"kubernetes.io/projected/6848c8bf-d8f5-4215-90fb-454b794e33ae-kube-api-access-fds97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5pkl\" (UID: \"6848c8bf-d8f5-4215-90fb-454b794e33ae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.245169 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.246214 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.268003 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.277807 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.286274 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fds97\" (UniqueName: \"kubernetes.io/projected/6848c8bf-d8f5-4215-90fb-454b794e33ae-kube-api-access-fds97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5pkl\" (UID: \"6848c8bf-d8f5-4215-90fb-454b794e33ae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.310072 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.346948 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fds97\" (UniqueName: \"kubernetes.io/projected/6848c8bf-d8f5-4215-90fb-454b794e33ae-kube-api-access-fds97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5pkl\" (UID: \"6848c8bf-d8f5-4215-90fb-454b794e33ae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.388789 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.389410 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.389566 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.389635 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:54.389613561 +0000 UTC m=+910.205813161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.403094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" event={"ID":"a4842ca7-909d-4d11-bba6-75555f3599b3","Type":"ContainerStarted","Data":"77e192e724a5fd51d78a9079321383e7e08f2b139eae04e2a63543ddd3fa43a4"} Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.482153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9"] Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.503901 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c55fa9_1fa4_415c_98c4_adfe080201d1.slice/crio-bcb81dc4ca88caac241e70cbae3ae0eaa0d0e53f2f6362998fa6abf7ec653a1c WatchSource:0}: Error finding container bcb81dc4ca88caac241e70cbae3ae0eaa0d0e53f2f6362998fa6abf7ec653a1c: Status 404 returned error can't find the container with id bcb81dc4ca88caac241e70cbae3ae0eaa0d0e53f2f6362998fa6abf7ec653a1c Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.570791 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm"] Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.579301 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod967a6928_46e0_4a1e_90bd_cc9a204d9099.slice/crio-4f6fcf7015ca74850af16a09069c1202c5b389cc4953bc4fbd71288dfa07cedf WatchSource:0}: Error finding container 4f6fcf7015ca74850af16a09069c1202c5b389cc4953bc4fbd71288dfa07cedf: Status 404 returned error can't find the container with id 4f6fcf7015ca74850af16a09069c1202c5b389cc4953bc4fbd71288dfa07cedf Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.584249 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.585891 4724 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.600588 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.600751 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.600918 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.601048 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:45:54.600966546 +0000 UTC m=+910.417166146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.602000 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.602128 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:45:54.602087299 +0000 UTC m=+910.418286899 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.665184 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.680649 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b607306_d732_4142_83d4_92ae20c714cd.slice/crio-6e176ff00a48bb90a6bf031cfb54ec7d480289a91091dac28feddf8e73eb369f WatchSource:0}: Error finding container 6e176ff00a48bb90a6bf031cfb54ec7d480289a91091dac28feddf8e73eb369f: Status 404 returned error can't find the container with id 6e176ff00a48bb90a6bf031cfb54ec7d480289a91091dac28feddf8e73eb369f Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.771813 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7"] Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.825137 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk"] Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.832630 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb906fefc_aaf5_48c0_b45b_3d11dbda1c3e.slice/crio-ab97157d294730295c6b53405d2d4efa9e9cedae445cca1ae31a68aff1cae07f WatchSource:0}: Error finding container ab97157d294730295c6b53405d2d4efa9e9cedae445cca1ae31a68aff1cae07f: Status 404 returned error can't find the container with id ab97157d294730295c6b53405d2d4efa9e9cedae445cca1ae31a68aff1cae07f Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.836763 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d"] Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.836868 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod973124e7_0723_4a5d_ab81_0ef8619f8754.slice/crio-7d27d862f6b4466e51f6e8e43ae76e6b7b4ab714f7de19164deb80e995ce8912 WatchSource:0}: Error finding container 7d27d862f6b4466e51f6e8e43ae76e6b7b4ab714f7de19164deb80e995ce8912: Status 404 returned error can't find the container with id 7d27d862f6b4466e51f6e8e43ae76e6b7b4ab714f7de19164deb80e995ce8912 Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.910811 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.911080 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 17:45:53 crc kubenswrapper[4724]: E0223 17:45:53.911144 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert 
Feb 23 17:45:53 crc kubenswrapper[4724]: I0223 17:45:53.942822 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk"]
Feb 23 17:45:53 crc kubenswrapper[4724]: W0223 17:45:53.954203 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b193934_08d8_4435_ae40_8b4d7b4878e7.slice/crio-df99aaf29d693a344e15438f62a3f767af3272aefc345ee67d4178e15d1be623 WatchSource:0}: Error finding container df99aaf29d693a344e15438f62a3f767af3272aefc345ee67d4178e15d1be623: Status 404 returned error can't find the container with id df99aaf29d693a344e15438f62a3f767af3272aefc345ee67d4178e15d1be623
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.000067 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8"]
Feb 23 17:45:54 crc kubenswrapper[4724]: W0223 17:45:54.002113 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73da6414_95e9_4d5a_a0ca_fbeb32048153.slice/crio-98f41fc6bcd4cd3b7efe1697abd27d9c911da6bee293e1210987770ff1950ce2 WatchSource:0}: Error finding container 98f41fc6bcd4cd3b7efe1697abd27d9c911da6bee293e1210987770ff1950ce2: Status 404 returned error can't find the container with id 98f41fc6bcd4cd3b7efe1697abd27d9c911da6bee293e1210987770ff1950ce2
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.088301 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx"]
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.118386 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j"]
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.129153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8"]
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.134453 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j"]
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.142286 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2"]
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.143730 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-szmk8_openstack-operators(77ba1933-d39b-4b30-9d8c-1500d7293444): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.143908 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stmwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-d5z2j_openstack-operators(8bc03a47-9ded-40c0-b924-0c936950a12a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.144939 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" podUID="77ba1933-d39b-4b30-9d8c-1500d7293444"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.145000 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" podUID="8bc03a47-9ded-40c0-b924-0c936950a12a"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.261451 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4"]
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.269723 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk"]
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.281609 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5v8z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5cb6b78489-7tdgw_openstack-operators(5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.283004 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" podUID="5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.284378 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw"]
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.302812 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grlqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-d85f4_openstack-operators(3b37faa8-6e4e-427a-9c1a-84993ed85290): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.304000 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" podUID="3b37faa8-6e4e-427a-9c1a-84993ed85290"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.372267 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl"]
Feb 23 17:45:54 crc kubenswrapper[4724]: W0223 17:45:54.379233 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6848c8bf_d8f5_4215_90fb_454b794e33ae.slice/crio-653cb9d7e487565e2828a20658d470d7fda4f965c58ade0d69de768fad4dcfd5 WatchSource:0}: Error finding container 653cb9d7e487565e2828a20658d470d7fda4f965c58ade0d69de768fad4dcfd5: Status 404 returned error can't find the container with id 653cb9d7e487565e2828a20658d470d7fda4f965c58ade0d69de768fad4dcfd5
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.381986 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fds97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-t5pkl_openstack-operators(6848c8bf-d8f5-4215-90fb-454b794e33ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.383537 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" podUID="6848c8bf-d8f5-4215-90fb-454b794e33ae"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.413285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" event={"ID":"8bc03a47-9ded-40c0-b924-0c936950a12a","Type":"ContainerStarted","Data":"c1d3c657cbff857e030b71c021a4f6e35d8518930d0ae5ea4e3a7df8a49d3fb6"}
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.415964 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" podUID="8bc03a47-9ded-40c0-b924-0c936950a12a"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.417309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" event={"ID":"8b193934-08d8-4435-ae40-8b4d7b4878e7","Type":"ContainerStarted","Data":"df99aaf29d693a344e15438f62a3f767af3272aefc345ee67d4178e15d1be623"}
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.420325 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"
Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.420466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" event={"ID":"2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5","Type":"ContainerStarted","Data":"d99a72c1bad8ab35f3933c1ba408678b9be53b5e4fb5b49e7fb5726a9487973f"}
Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.420544 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
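[editor's note] The burst of "ErrImagePull: pull QPS exceeded" failures above is the kubelet's registry-pull rate limit, not a registry error: pulls are gated behind a token-bucket limiter sized by the KubeletConfiguration fields registryPullQPS and registryBurst (documented defaults 5 and 10), and when roughly twenty operator pods schedule at once the pulls beyond the burst fail immediately and drop into ImagePullBackOff, as the entries that follow show. A toy illustration of that gating, assuming the kubelet uses a flowcontrol-style token bucket with the default sizing:

// qps_sketch.go -- demonstrates why a burst of ~20 simultaneous pulls trips
// the limiter; the exact limiter type inside the kubelet is an assumption.
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// registryPullQPS=5, registryBurst=10: the documented defaults.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)
	for i := 1; i <= 20; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i) // the string the log shows
		}
	}
}

Raising registryPullQPS/registryBurst in the kubelet config, or pre-pulling the operator images, would avoid the initial failures; left alone, the throttled pulls are simply retried under back-off.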
"openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.420720 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:56.420668482 +0000 UTC m=+912.236868142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.428511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" event={"ID":"ca793345-c1e2-4207-844b-170dd5b70066","Type":"ContainerStarted","Data":"915989b4fedb020ec81ff3b313af1aa9c059c2dd993920f7353091ff9ef52716"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.433518 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" event={"ID":"a8f9c97e-0259-4c6e-b188-33081d1706fd","Type":"ContainerStarted","Data":"44ad89444115c8e0c9707f7374fee87c869d94b945df7baa39fb21a87328b999"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.438680 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" event={"ID":"973124e7-0723-4a5d-ab81-0ef8619f8754","Type":"ContainerStarted","Data":"7d27d862f6b4466e51f6e8e43ae76e6b7b4ab714f7de19164deb80e995ce8912"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.440917 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" event={"ID":"5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a","Type":"ContainerStarted","Data":"af36729a1948f8ac28284cb075d2b33d3a65fad5f1f2e1ad8cf3d61c5b7ab799"} Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.443695 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" podUID="5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a" Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.444826 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" event={"ID":"3b37faa8-6e4e-427a-9c1a-84993ed85290","Type":"ContainerStarted","Data":"8ae18fef86f3c40dbfe77b35890c61e747bb8533f130a36b44d61b6c218660fe"} Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.446883 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" podUID="3b37faa8-6e4e-427a-9c1a-84993ed85290" Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.448906 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" event={"ID":"967a6928-46e0-4a1e-90bd-cc9a204d9099","Type":"ContainerStarted","Data":"4f6fcf7015ca74850af16a09069c1202c5b389cc4953bc4fbd71288dfa07cedf"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.451465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" event={"ID":"77ba1933-d39b-4b30-9d8c-1500d7293444","Type":"ContainerStarted","Data":"a4e99ea3922250c4e08bb34c529d3d653b4e7317fc554473c44df898ee7b0c05"} Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.453149 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" podUID="77ba1933-d39b-4b30-9d8c-1500d7293444" Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.455042 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" event={"ID":"dedf8817-f3cf-4630-a825-71059f681d10","Type":"ContainerStarted","Data":"12146ea5c882e1bee8f45de522a171ec2e67fcddfcbd132f5c9ff9e70b3c329c"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.466859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" event={"ID":"6848c8bf-d8f5-4215-90fb-454b794e33ae","Type":"ContainerStarted","Data":"653cb9d7e487565e2828a20658d470d7fda4f965c58ade0d69de768fad4dcfd5"} Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.470826 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" podUID="6848c8bf-d8f5-4215-90fb-454b794e33ae" Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.471448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" event={"ID":"dd866f81-0e85-4690-b16d-45baf5e856ed","Type":"ContainerStarted","Data":"9e93664fa7bc9817971f8aeffbb35d781faa4910899ae1fa0d79b1fef9b55c62"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.473231 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" event={"ID":"e37a1f8b-cee7-4a13-879e-496d26735ab4","Type":"ContainerStarted","Data":"03940ece7053c788b19d333abf7bf9cea18cc8f6d7292e007e806620cd7a6846"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.474698 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" event={"ID":"b906fefc-aaf5-48c0-b45b-3d11dbda1c3e","Type":"ContainerStarted","Data":"ab97157d294730295c6b53405d2d4efa9e9cedae445cca1ae31a68aff1cae07f"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.479805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" 
event={"ID":"73da6414-95e9-4d5a-a0ca-fbeb32048153","Type":"ContainerStarted","Data":"98f41fc6bcd4cd3b7efe1697abd27d9c911da6bee293e1210987770ff1950ce2"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.483253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" event={"ID":"6b607306-d732-4142-83d4-92ae20c714cd","Type":"ContainerStarted","Data":"6e176ff00a48bb90a6bf031cfb54ec7d480289a91091dac28feddf8e73eb369f"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.484454 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" event={"ID":"70c55fa9-1fa4-415c-98c4-adfe080201d1","Type":"ContainerStarted","Data":"bcb81dc4ca88caac241e70cbae3ae0eaa0d0e53f2f6362998fa6abf7ec653a1c"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.487327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" event={"ID":"24d796b9-e6ea-4b70-9424-1352f71c80a6","Type":"ContainerStarted","Data":"e03b13fd533b3625e74a21b7d412510b7b52af770d1e9556e200373ca8b73b48"} Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.584030 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.625931 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:54 crc kubenswrapper[4724]: I0223 17:45:54.626147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.626285 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.626348 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:45:56.626328563 +0000 UTC m=+912.442528163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.626734 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 17:45:54 crc kubenswrapper[4724]: E0223 17:45:54.626759 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. 
Feb 23 17:45:55 crc kubenswrapper[4724]: I0223 17:45:55.526013 4724 generic.go:334] "Generic (PLEG): container finished" podID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerID="5243abe0260b4d7be9e78ddd0648164966e7ba35c579da6fb6f9c73b79784cab" exitCode=0
Feb 23 17:45:55 crc kubenswrapper[4724]: I0223 17:45:55.527321 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gr5gv" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server" containerID="cri-o://ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" gracePeriod=2
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.529412 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" podUID="8bc03a47-9ded-40c0-b924-0c936950a12a"
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.529452 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" podUID="5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a"
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.529859 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" podUID="77ba1933-d39b-4b30-9d8c-1500d7293444"
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.529957 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" podUID="3b37faa8-6e4e-427a-9c1a-84993ed85290"
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.532081 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" podUID="6848c8bf-d8f5-4215-90fb-454b794e33ae"
Feb 23 17:45:55 crc kubenswrapper[4724]: I0223 17:45:55.526122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerDied","Data":"5243abe0260b4d7be9e78ddd0648164966e7ba35c579da6fb6f9c73b79784cab"}
Feb 23 17:45:55 crc kubenswrapper[4724]: I0223 17:45:55.954026 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.954262 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 23 17:45:55 crc kubenswrapper[4724]: E0223 17:45:55.954339 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert podName:7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3 nodeName:}" failed. No retries permitted until 2026-02-23 17:45:59.954317541 +0000 UTC m=+915.770517141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert") pod "infra-operator-controller-manager-79d975b745-pb2dv" (UID: "7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3") : secret "infra-operator-webhook-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: I0223 17:45:56.461904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.462192 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.462634 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:46:00.462611146 +0000 UTC m=+916.278810746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: I0223 17:45:56.545326 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerID="ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" exitCode=0
Feb 23 17:45:56 crc kubenswrapper[4724]: I0223 17:45:56.545384 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerDied","Data":"ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5"}
Feb 23 17:45:56 crc kubenswrapper[4724]: I0223 17:45:56.665658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:45:56 crc kubenswrapper[4724]: I0223 17:45:56.665744 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.665888 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.665950 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.666009 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:00.665979281 +0000 UTC m=+916.482178941 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found
Feb 23 17:45:56 crc kubenswrapper[4724]: E0223 17:45:56.666033 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:00.666025272 +0000 UTC m=+916.482224962 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "webhook-server-cert" not found
Feb 23 17:45:57 crc kubenswrapper[4724]: I0223 17:45:57.752580 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 17:45:57 crc kubenswrapper[4724]: I0223 17:45:57.752660 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 17:45:57 crc kubenswrapper[4724]: I0223 17:45:57.752716 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r"
Feb 23 17:45:57 crc kubenswrapper[4724]: I0223 17:45:57.753422 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 17:45:57 crc kubenswrapper[4724]: I0223 17:45:57.753479 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f" gracePeriod=600
Feb 23 17:45:58 crc kubenswrapper[4724]: I0223 17:45:58.563209 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f" exitCode=0
Feb 23 17:45:58 crc kubenswrapper[4724]: I0223 17:45:58.563777 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f"}
Feb 23 17:45:58 crc kubenswrapper[4724]: I0223 17:45:58.563876 4724 scope.go:117] "RemoveContainer" containerID="9be474f9627637d77fe947efade6a752f0ba58fbd772db2e8c59cd37a04b285e"
Feb 23 17:46:00 crc kubenswrapper[4724]: I0223 17:46:00.013542 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.013789 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.013859 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert podName:7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3 nodeName:}" failed. No retries permitted until 2026-02-23 17:46:08.01383713 +0000 UTC m=+923.830036730 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert") pod "infra-operator-controller-manager-79d975b745-pb2dv" (UID: "7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3") : secret "infra-operator-webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: I0223 17:46:00.524865 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.525008 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.525077 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:46:08.525057594 +0000 UTC m=+924.341257194 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: I0223 17:46:00.729066 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:46:00 crc kubenswrapper[4724]: I0223 17:46:00.729242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.729308 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.729417 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.729461 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:08.729433259 +0000 UTC m=+924.545632859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "webhook-server-cert" not found
Feb 23 17:46:00 crc kubenswrapper[4724]: E0223 17:46:00.729521 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:08.72947523 +0000 UTC m=+924.545674880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found
Feb 23 17:46:03 crc kubenswrapper[4724]: E0223 17:46:03.438681 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5 is running failed: container process not found" containerID="ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 23 17:46:03 crc kubenswrapper[4724]: E0223 17:46:03.440067 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5 is running failed: container process not found" containerID="ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 23 17:46:03 crc kubenswrapper[4724]: E0223 17:46:03.440741 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5 is running failed: container process not found" containerID="ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" cmd=["grpc_health_probe","-addr=:50051"]
Feb 23 17:46:03 crc kubenswrapper[4724]: E0223 17:46:03.440859 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-gr5gv" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server"
Feb 23 17:46:05 crc kubenswrapper[4724]: I0223 17:46:05.953986 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.057202 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.057482 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
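[editor's note] The ExecSync failures above are the readiness probe of the terminating redhat-operators catalog pod: the probe execs grpc_health_probe -addr=:50051 inside the registry-server container, and since that container's process has already exited, CRI-O returns NotFound and the probe errors rather than merely failing. For context, this is roughly what grpc_health_probe itself does against a live registry server; the target address matches the probe command in the log, the rest is a sketch:

// health_probe_sketch.go -- calls the standard gRPC health service the way
// `grpc_health_probe -addr=:50051` does. Against a running registry-server it
// prints SERVING; the failure in the log happens a layer earlier, because the
// exec target no longer has a process to run this in.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println("connect error:", err)
		return
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("rpc error:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING / NOT_SERVING
}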
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.057849 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert podName:7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3 nodeName:}" failed. No retries permitted until 2026-02-23 17:46:24.057821957 +0000 UTC m=+939.874021557 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert") pod "infra-operator-controller-manager-79d975b745-pb2dv" (UID: "7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3") : secret "infra-operator-webhook-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.566593 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.566974 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.567050 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert podName:63923048-2ad5-45f9-9285-9d84dc711fa7 nodeName:}" failed. No retries permitted until 2026-02-23 17:46:24.56702625 +0000 UTC m=+940.383225850 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" (UID: "63923048-2ad5-45f9-9285-9d84dc711fa7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.770024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.770088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.770224 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.770289 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.770307 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:24.770285073 +0000 UTC m=+940.586484673 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "metrics-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: E0223 17:46:08.770350 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs podName:c38380c9-1ff8-4a96-9c4a-15ed760a25db nodeName:}" failed. No retries permitted until 2026-02-23 17:46:24.770331974 +0000 UTC m=+940.586531574 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs") pod "openstack-operator-controller-manager-bf9ddc465-xrp8k" (UID: "c38380c9-1ff8-4a96-9c4a-15ed760a25db") : secret "webhook-server-cert" not found
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.976040 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"]
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.977688 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:08 crc kubenswrapper[4724]: I0223 17:46:08.989512 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"]
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.074888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzgfw\" (UniqueName: \"kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.074996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.075031 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.176908 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzgfw\" (UniqueName: \"kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.177014 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.177052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.177612 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.178463 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.200295 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzgfw\" (UniqueName: \"kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw\") pod \"redhat-marketplace-rbd7w\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:09 crc kubenswrapper[4724]: I0223 17:46:09.305641 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbd7w"
Feb 23 17:46:11 crc kubenswrapper[4724]: E0223 17:46:11.972777 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"
Feb 23 17:46:11 crc kubenswrapper[4724]: E0223 17:46:11.973862 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swgp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-fxj7d_openstack-operators(b906fefc-aaf5-48c0-b45b-3d11dbda1c3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 17:46:11 crc kubenswrapper[4724]: E0223 17:46:11.975057 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" podUID="b906fefc-aaf5-48c0-b45b-3d11dbda1c3e"
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" podUID="b906fefc-aaf5-48c0-b45b-3d11dbda1c3e" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.136180 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.229121 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities\") pod \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.229210 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l68g\" (UniqueName: \"kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g\") pod \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.229249 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content\") pod \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\" (UID: \"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608\") " Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.230062 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities" (OuterVolumeSpecName: "utilities") pod "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" (UID: "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.233240 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.236941 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g" (OuterVolumeSpecName: "kube-api-access-4l68g") pod "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" (UID: "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608"). InnerVolumeSpecName "kube-api-access-4l68g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.337105 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l68g\" (UniqueName: \"kubernetes.io/projected/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-kube-api-access-4l68g\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.351160 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:12 crc kubenswrapper[4724]: E0223 17:46:12.357025 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="extract-content" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.357077 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="extract-content" Feb 23 17:46:12 crc kubenswrapper[4724]: E0223 17:46:12.357096 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.357105 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server" Feb 23 17:46:12 crc kubenswrapper[4724]: E0223 17:46:12.357122 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="extract-utilities" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.357130 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="extract-utilities" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.357628 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" containerName="registry-server" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.361034 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.375511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.386733 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" (UID: "4f30ea74-e80f-4a6d-99bc-39f2f1ec1608"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.439850 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.439968 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skgc2\" (UniqueName: \"kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.440018 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.440264 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.541881 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.541990 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skgc2\" (UniqueName: \"kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.542025 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.542679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.542888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc 
kubenswrapper[4724]: I0223 17:46:12.564318 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skgc2\" (UniqueName: \"kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2\") pod \"certified-operators-bdnnb\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.676883 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr5gv" event={"ID":"4f30ea74-e80f-4a6d-99bc-39f2f1ec1608","Type":"ContainerDied","Data":"abbceca4f083b28ae85ac4dccc5233e640f158f61263c4d625ce616fd9a4bedb"} Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.676989 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr5gv" Feb 23 17:46:12 crc kubenswrapper[4724]: E0223 17:46:12.679247 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" podUID="b906fefc-aaf5-48c0-b45b-3d11dbda1c3e" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.706963 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.744728 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.751976 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gr5gv"] Feb 23 17:46:12 crc kubenswrapper[4724]: I0223 17:46:12.962047 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f30ea74-e80f-4a6d-99bc-39f2f1ec1608" path="/var/lib/kubelet/pods/4f30ea74-e80f-4a6d-99bc-39f2f1ec1608/volumes" Feb 23 17:46:14 crc kubenswrapper[4724]: E0223 17:46:14.228892 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 23 17:46:14 crc kubenswrapper[4724]: E0223 17:46:14.229138 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nsn2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-p42tx_openstack-operators(24d796b9-e6ea-4b70-9424-1352f71c80a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:46:14 crc kubenswrapper[4724]: E0223 17:46:14.231881 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" podUID="24d796b9-e6ea-4b70-9424-1352f71c80a6" Feb 23 17:46:14 crc kubenswrapper[4724]: E0223 17:46:14.691667 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" podUID="24d796b9-e6ea-4b70-9424-1352f71c80a6" Feb 23 17:46:15 crc kubenswrapper[4724]: E0223 17:46:15.026668 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" Feb 23 17:46:15 crc kubenswrapper[4724]: E0223 17:46:15.027017 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xktmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-92g5j_openstack-operators(a8f9c97e-0259-4c6e-b188-33081d1706fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:46:15 crc kubenswrapper[4724]: E0223 17:46:15.028261 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" podUID="a8f9c97e-0259-4c6e-b188-33081d1706fd" Feb 23 17:46:15 crc kubenswrapper[4724]: E0223 17:46:15.701979 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" podUID="a8f9c97e-0259-4c6e-b188-33081d1706fd" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.060811 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.061045 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68dnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-9s4mk_openstack-operators(8b193934-08d8-4435-ae40-8b4d7b4878e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.062362 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" podUID="8b193934-08d8-4435-ae40-8b4d7b4878e7" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.878808 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" podUID="8b193934-08d8-4435-ae40-8b4d7b4878e7" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.891892 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.892082 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8r4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-wqsvk_openstack-operators(e37a1f8b-cee7-4a13-879e-496d26735ab4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:46:16 crc kubenswrapper[4724]: E0223 17:46:16.894029 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" podUID="e37a1f8b-cee7-4a13-879e-496d26735ab4" Feb 23 17:46:17 crc kubenswrapper[4724]: E0223 17:46:17.643876 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 23 17:46:17 crc kubenswrapper[4724]: E0223 17:46:17.644216 4724 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5fwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-djmpk_openstack-operators(973124e7-0723-4a5d-ab81-0ef8619f8754): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:46:17 crc kubenswrapper[4724]: E0223 17:46:17.645555 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" podUID="973124e7-0723-4a5d-ab81-0ef8619f8754" Feb 23 17:46:17 crc kubenswrapper[4724]: E0223 17:46:17.725520 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" podUID="973124e7-0723-4a5d-ab81-0ef8619f8754" Feb 23 17:46:17 crc kubenswrapper[4724]: E0223 17:46:17.726502 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" podUID="e37a1f8b-cee7-4a13-879e-496d26735ab4" Feb 23 17:46:21 crc kubenswrapper[4724]: I0223 17:46:21.419012 4724 scope.go:117] "RemoveContainer" containerID="ef6ebd122c42e5fbcde2c3f5c12ac794f9eda6f33c1c80329356588cd510cfe5" Feb 23 17:46:21 crc kubenswrapper[4724]: I0223 17:46:21.765591 4724 scope.go:117] "RemoveContainer" containerID="a2500ba67cd05c470126b8314a4877efbba0869e7d1b226f81454834e6ea0992" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.049375 4724 scope.go:117] "RemoveContainer" containerID="13627d586fe2fa5e81796f76ef9dd2cfddefc7c8a9c107ddd8d1956f23b678a4" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.123621 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"] Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.152723 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.766128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" event={"ID":"8bc03a47-9ded-40c0-b924-0c936950a12a","Type":"ContainerStarted","Data":"41bb7e8e478ef388c2b7708b4bb0edcc55b007fc32b986e59c198323bcd10803"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.766907 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.770353 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" event={"ID":"6b607306-d732-4142-83d4-92ae20c714cd","Type":"ContainerStarted","Data":"f1b87de618cbab733858fc7fa4a3c42934db13a180541bcd661a5df9e947cb25"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.770445 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.774096 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" event={"ID":"dd866f81-0e85-4690-b16d-45baf5e856ed","Type":"ContainerStarted","Data":"a7e438c22b2e1e89e9f56d4cd0165cf5f4fc03f79d1412ccef121eedd1e48d58"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.774196 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.777102 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" event={"ID":"dedf8817-f3cf-4630-a825-71059f681d10","Type":"ContainerStarted","Data":"aeb4520aa70571e56df67670f48051aa3b0a0e09612c6b081345da0faf1b8d86"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.778945 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" event={"ID":"2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5","Type":"ContainerStarted","Data":"a6a395b4804c6876b594e6b83e38570decc41a5efc53d35360fe2de6bd133fd8"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 
17:46:22.780093 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerStarted","Data":"71fb9107eec885209d25042a2b6aa1b4118bc38a69bdbd9cdfc51a51a69d3e26"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.782142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" event={"ID":"a4842ca7-909d-4d11-bba6-75555f3599b3","Type":"ContainerStarted","Data":"2d6bf3acdf2d26ea49c1e6ef58827486ce6b378ffbeaa015e9fc5e5067bed7ac"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.782284 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.782992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerStarted","Data":"decb3487ff38eebf49f21ed718b543a3f5953c55c2362711e1e0498d2f61ec76"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.784211 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" event={"ID":"73da6414-95e9-4d5a-a0ca-fbeb32048153","Type":"ContainerStarted","Data":"238ae9dca5b578cc166aa50b903d2002e16109bc63f1133e00ddbf0510af2b10"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.784872 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.786797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" event={"ID":"70c55fa9-1fa4-415c-98c4-adfe080201d1","Type":"ContainerStarted","Data":"03c3d3404b130eadb856772433c46669e827f93c63d29bda506b3ee59e6740dc"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.787336 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.789598 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" event={"ID":"967a6928-46e0-4a1e-90bd-cc9a204d9099","Type":"ContainerStarted","Data":"83539d1a2cc451cb1c21d9e210c05429e71bee309727374cb969fd44330ca573"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.790237 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.792209 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" podStartSLOduration=3.516716941 podStartE2EDuration="30.792192777s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.143825657 +0000 UTC m=+909.960025257" lastFinishedPulling="2026-02-23 17:46:21.419301493 +0000 UTC m=+937.235501093" observedRunningTime="2026-02-23 17:46:22.790686967 +0000 UTC m=+938.606886587" watchObservedRunningTime="2026-02-23 17:46:22.792192777 +0000 UTC m=+938.608392377" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 
17:46:22.793550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerStarted","Data":"f3d444a83f571af6f6ae56c967144fa50089367f520d716e94aed28bc64ce628"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.797212 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.799254 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" event={"ID":"3b37faa8-6e4e-427a-9c1a-84993ed85290","Type":"ContainerStarted","Data":"aa41e2083400fe2024fe440a4c341e42cf6b0f69527593e57bc7ca4a41277641"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.801285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" event={"ID":"77ba1933-d39b-4b30-9d8c-1500d7293444","Type":"ContainerStarted","Data":"6c0c514428664acb972697420dead6949afe781a0757ebb02d361e1d109029b3"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.801494 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.805412 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" event={"ID":"ca793345-c1e2-4207-844b-170dd5b70066","Type":"ContainerStarted","Data":"8bd7be415943b50da875fa20d44e1d0ee87f91523716ffd6b58ab08280f0e910"} Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.805583 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.825329 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" podStartSLOduration=7.376548614 podStartE2EDuration="31.825297503s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.126995653 +0000 UTC m=+908.943195253" lastFinishedPulling="2026-02-23 17:46:17.575744552 +0000 UTC m=+933.391944142" observedRunningTime="2026-02-23 17:46:22.822239372 +0000 UTC m=+938.638438992" watchObservedRunningTime="2026-02-23 17:46:22.825297503 +0000 UTC m=+938.641497123" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.851309 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" podStartSLOduration=7.406528308 podStartE2EDuration="31.851281437s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.707530652 +0000 UTC m=+909.523730252" lastFinishedPulling="2026-02-23 17:46:18.152283781 +0000 UTC m=+933.968483381" observedRunningTime="2026-02-23 17:46:22.851151274 +0000 UTC m=+938.667350874" watchObservedRunningTime="2026-02-23 17:46:22.851281437 +0000 UTC m=+938.667481037" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.884653 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" podStartSLOduration=6.776334443 podStartE2EDuration="30.884622738s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.044021297 +0000 UTC m=+909.860220897" lastFinishedPulling="2026-02-23 17:46:18.152309592 +0000 UTC m=+933.968509192" observedRunningTime="2026-02-23 17:46:22.883306351 +0000 UTC m=+938.699505951" watchObservedRunningTime="2026-02-23 17:46:22.884622738 +0000 UTC m=+938.700822328" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.962367 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" podStartSLOduration=6.503719705 podStartE2EDuration="30.962344663s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.693813856 +0000 UTC m=+909.510013456" lastFinishedPulling="2026-02-23 17:46:18.152438814 +0000 UTC m=+933.968638414" observedRunningTime="2026-02-23 17:46:22.913709294 +0000 UTC m=+938.729908894" watchObservedRunningTime="2026-02-23 17:46:22.962344663 +0000 UTC m=+938.778544263" Feb 23 17:46:22 crc kubenswrapper[4724]: I0223 17:46:22.977609 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" podStartSLOduration=7.367704145 podStartE2EDuration="31.977566159s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.542149072 +0000 UTC m=+909.358348672" lastFinishedPulling="2026-02-23 17:46:18.152011086 +0000 UTC m=+933.968210686" observedRunningTime="2026-02-23 17:46:22.956293821 +0000 UTC m=+938.772493421" watchObservedRunningTime="2026-02-23 17:46:22.977566159 +0000 UTC m=+938.793765759" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.119613 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" podStartSLOduration=5.999862504 podStartE2EDuration="32.119589799s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.651460473 +0000 UTC m=+909.467660073" lastFinishedPulling="2026-02-23 17:46:19.771187768 +0000 UTC m=+935.587387368" observedRunningTime="2026-02-23 17:46:23.117535958 +0000 UTC m=+938.933735558" watchObservedRunningTime="2026-02-23 17:46:23.119589799 +0000 UTC m=+938.935789389" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.189527 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" podStartSLOduration=7.180573644 podStartE2EDuration="31.189498727s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.143109633 +0000 UTC m=+909.959309233" lastFinishedPulling="2026-02-23 17:46:18.152034716 +0000 UTC m=+933.968234316" observedRunningTime="2026-02-23 17:46:23.1846701 +0000 UTC m=+939.000869700" watchObservedRunningTime="2026-02-23 17:46:23.189498727 +0000 UTC m=+939.005698327" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.295106 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" podStartSLOduration=3.782091234 podStartE2EDuration="31.295080663s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.143537961 +0000 UTC 
m=+909.959737561" lastFinishedPulling="2026-02-23 17:46:21.65652739 +0000 UTC m=+937.472726990" observedRunningTime="2026-02-23 17:46:23.293592573 +0000 UTC m=+939.109792163" watchObservedRunningTime="2026-02-23 17:46:23.295080663 +0000 UTC m=+939.111280263" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.332493 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-75ft9" podStartSLOduration=8.074325768 podStartE2EDuration="33.332465126s" podCreationTimestamp="2026-02-23 17:45:50 +0000 UTC" firstStartedPulling="2026-02-23 17:45:52.317552473 +0000 UTC m=+908.133752073" lastFinishedPulling="2026-02-23 17:46:17.575691831 +0000 UTC m=+933.391891431" observedRunningTime="2026-02-23 17:46:23.324518065 +0000 UTC m=+939.140717665" watchObservedRunningTime="2026-02-23 17:46:23.332465126 +0000 UTC m=+939.148664726" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.815015 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" event={"ID":"6848c8bf-d8f5-4215-90fb-454b794e33ae","Type":"ContainerStarted","Data":"d57bc929f663baead11117bce11c7897a733ad105cd15ed227dcced67989e161"} Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.818129 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" event={"ID":"5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a","Type":"ContainerStarted","Data":"406addae86db72c19fb1eba2785b957578a26819d25a73becae831c8225bf25b"} Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.818749 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.821104 4724 generic.go:334] "Generic (PLEG): container finished" podID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerID="5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba" exitCode=0 Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.821188 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerDied","Data":"5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba"} Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.823443 4724 generic.go:334] "Generic (PLEG): container finished" podID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerID="0a9d82626480647767c8fef9fb0e14590d459d9bb19f2d3571d8700727d6a366" exitCode=0 Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.823580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerDied","Data":"0a9d82626480647767c8fef9fb0e14590d459d9bb19f2d3571d8700727d6a366"} Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.845696 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5pkl" podStartSLOduration=4.139874817 podStartE2EDuration="31.845672218s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.38185958 +0000 UTC m=+910.198059180" lastFinishedPulling="2026-02-23 17:46:22.087656981 +0000 UTC m=+937.903856581" observedRunningTime="2026-02-23 17:46:23.84031962 +0000 UTC m=+939.656519220" 
watchObservedRunningTime="2026-02-23 17:46:23.845672218 +0000 UTC m=+939.661871808" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.858987 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" podStartSLOduration=3.966719731 podStartE2EDuration="31.858957716s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.281110281 +0000 UTC m=+910.097309891" lastFinishedPulling="2026-02-23 17:46:22.173348276 +0000 UTC m=+937.989547876" observedRunningTime="2026-02-23 17:46:23.857997386 +0000 UTC m=+939.674196996" watchObservedRunningTime="2026-02-23 17:46:23.858957716 +0000 UTC m=+939.675157316" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.931009 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" podStartSLOduration=6.67115304 podStartE2EDuration="32.930976726s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.477545661 +0000 UTC m=+909.293745261" lastFinishedPulling="2026-02-23 17:46:19.737369357 +0000 UTC m=+935.553568947" observedRunningTime="2026-02-23 17:46:23.921499175 +0000 UTC m=+939.737698785" watchObservedRunningTime="2026-02-23 17:46:23.930976726 +0000 UTC m=+939.747176326" Feb 23 17:46:23 crc kubenswrapper[4724]: I0223 17:46:23.950144 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" podStartSLOduration=4.446634884 podStartE2EDuration="31.950116631s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.302577304 +0000 UTC m=+910.118776904" lastFinishedPulling="2026-02-23 17:46:21.806059051 +0000 UTC m=+937.622258651" observedRunningTime="2026-02-23 17:46:23.947956768 +0000 UTC m=+939.764156378" watchObservedRunningTime="2026-02-23 17:46:23.950116631 +0000 UTC m=+939.766316231" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.142759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.151576 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3-cert\") pod \"infra-operator-controller-manager-79d975b745-pb2dv\" (UID: \"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.205062 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.493820 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" podStartSLOduration=9.291756076 podStartE2EDuration="33.493793818s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.950026385 +0000 UTC m=+909.766225985" lastFinishedPulling="2026-02-23 17:46:18.152064127 +0000 UTC m=+933.968263727" observedRunningTime="2026-02-23 17:46:24.02602718 +0000 UTC m=+939.842226780" watchObservedRunningTime="2026-02-23 17:46:24.493793818 +0000 UTC m=+940.309993418" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.497409 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv"] Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.650572 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.659700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63923048-2ad5-45f9-9285-9d84dc711fa7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv\" (UID: \"63923048-2ad5-45f9-9285-9d84dc711fa7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.833235 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" event={"ID":"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3","Type":"ContainerStarted","Data":"0abbffe7706d1e0d8ad6374c4174fd7a55177670f043db476316302d826e9463"} Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.841444 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.853622 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.853924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.859208 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-webhook-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:24 crc kubenswrapper[4724]: I0223 17:46:24.859242 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38380c9-1ff8-4a96-9c4a-15ed760a25db-metrics-certs\") pod \"openstack-operator-controller-manager-bf9ddc465-xrp8k\" (UID: \"c38380c9-1ff8-4a96-9c4a-15ed760a25db\") " pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.124000 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.350955 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv"] Feb 23 17:46:25 crc kubenswrapper[4724]: W0223 17:46:25.370512 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63923048_2ad5_45f9_9285_9d84dc711fa7.slice/crio-8545de9fa6cf94cdcd113ebbffcd3e1e041c0fb2fe2d5d32d2de052c87691e51 WatchSource:0}: Error finding container 8545de9fa6cf94cdcd113ebbffcd3e1e041c0fb2fe2d5d32d2de052c87691e51: Status 404 returned error can't find the container with id 8545de9fa6cf94cdcd113ebbffcd3e1e041c0fb2fe2d5d32d2de052c87691e51 Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.651876 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k"] Feb 23 17:46:25 crc kubenswrapper[4724]: W0223 17:46:25.660225 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38380c9_1ff8_4a96_9c4a_15ed760a25db.slice/crio-fca1bb2887619bfc4ffba1385d416d7da0dd5407ffcb38e4683ef0e8d2534bbb WatchSource:0}: Error finding container fca1bb2887619bfc4ffba1385d416d7da0dd5407ffcb38e4683ef0e8d2534bbb: Status 404 returned error can't find the container with id fca1bb2887619bfc4ffba1385d416d7da0dd5407ffcb38e4683ef0e8d2534bbb Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.845218 4724 generic.go:334] "Generic (PLEG): container finished" podID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerID="8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f" exitCode=0 Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.845330 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerDied","Data":"8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f"} Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.852050 4724 generic.go:334] "Generic (PLEG): container finished" podID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerID="8b07114d851686e0b324d850290d219907af9da2b7324c624d606c2724cd9b4c" exitCode=0 Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.852133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerDied","Data":"8b07114d851686e0b324d850290d219907af9da2b7324c624d606c2724cd9b4c"} Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.861023 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" event={"ID":"63923048-2ad5-45f9-9285-9d84dc711fa7","Type":"ContainerStarted","Data":"8545de9fa6cf94cdcd113ebbffcd3e1e041c0fb2fe2d5d32d2de052c87691e51"} Feb 23 17:46:25 crc kubenswrapper[4724]: I0223 17:46:25.864326 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" event={"ID":"c38380c9-1ff8-4a96-9c4a-15ed760a25db","Type":"ContainerStarted","Data":"fca1bb2887619bfc4ffba1385d416d7da0dd5407ffcb38e4683ef0e8d2534bbb"} Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.883784 4724 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" event={"ID":"c38380c9-1ff8-4a96-9c4a-15ed760a25db","Type":"ContainerStarted","Data":"07a4b6cab751c365271ef67f0a65c858fbd24326d33eba3c08b2c1c3f361948e"} Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.884584 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.886551 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" event={"ID":"b906fefc-aaf5-48c0-b45b-3d11dbda1c3e","Type":"ContainerStarted","Data":"02a6429e92a5a30911181aad78fcef192c5e362579d4016fc5927b4f14520dce"} Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.886890 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.921980 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" podStartSLOduration=34.92194496 podStartE2EDuration="34.92194496s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:46:26.9144539 +0000 UTC m=+942.730653500" watchObservedRunningTime="2026-02-23 17:46:26.92194496 +0000 UTC m=+942.738144560" Feb 23 17:46:26 crc kubenswrapper[4724]: I0223 17:46:26.939183 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" podStartSLOduration=3.188771147 podStartE2EDuration="34.939158357s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.918075041 +0000 UTC m=+909.734274641" lastFinishedPulling="2026-02-23 17:46:25.668462251 +0000 UTC m=+941.484661851" observedRunningTime="2026-02-23 17:46:26.931958752 +0000 UTC m=+942.748158372" watchObservedRunningTime="2026-02-23 17:46:26.939158357 +0000 UTC m=+942.755357957" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.914700 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" event={"ID":"8b193934-08d8-4435-ae40-8b4d7b4878e7","Type":"ContainerStarted","Data":"1e10a108ca5bef7900f0f289592f62b0c7963ab3d1f00ed679751cd5edeb0332"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.915745 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.916304 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" event={"ID":"e37a1f8b-cee7-4a13-879e-496d26735ab4","Type":"ContainerStarted","Data":"4ae587e53b306b318c1a58bcb3bd2f2f919e0c736e4634a9830a7a76f5a5e415"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.916538 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.918848 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerStarted","Data":"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.920794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerStarted","Data":"96d3203dbefc3253c0d7c8ddc8dd825f5fdc9da84f9ecd9d313659ae0841b293"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.922078 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" event={"ID":"63923048-2ad5-45f9-9285-9d84dc711fa7","Type":"ContainerStarted","Data":"036aa0e8b5562f9451f8d79a5e7371740fe4c681340229b4e18fb6e816ac4d53"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.922219 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.923148 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" event={"ID":"7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3","Type":"ContainerStarted","Data":"9aa550d819e62cf4b68ccb86f14150e666da21d33fa0ae9799d1284aa8114161"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.923280 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.924310 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" event={"ID":"a8f9c97e-0259-4c6e-b188-33081d1706fd","Type":"ContainerStarted","Data":"14904bad8dbd3075aaf81cf955b51db6f954dff218cce97525102aa9f76d3e16"} Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.924448 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.946354 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" podStartSLOduration=2.762851171 podStartE2EDuration="37.946328707s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.043955146 +0000 UTC m=+909.860154746" lastFinishedPulling="2026-02-23 17:46:29.227432682 +0000 UTC m=+945.043632282" observedRunningTime="2026-02-23 17:46:29.936812755 +0000 UTC m=+945.753012355" watchObservedRunningTime="2026-02-23 17:46:29.946328707 +0000 UTC m=+945.762528307" Feb 23 17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.973406 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" podStartSLOduration=34.518934485 podStartE2EDuration="37.973354861s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:46:25.375175105 +0000 UTC m=+941.191374715" lastFinishedPulling="2026-02-23 17:46:28.829595491 +0000 UTC m=+944.645795091" observedRunningTime="2026-02-23 17:46:29.970137936 +0000 UTC m=+945.786337546" watchObservedRunningTime="2026-02-23 17:46:29.973354861 +0000 UTC m=+945.789554461" Feb 23 
17:46:29 crc kubenswrapper[4724]: I0223 17:46:29.995931 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" podStartSLOduration=2.532158897 podStartE2EDuration="37.995906075s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.110480076 +0000 UTC m=+909.926679676" lastFinishedPulling="2026-02-23 17:46:29.574227254 +0000 UTC m=+945.390426854" observedRunningTime="2026-02-23 17:46:29.990827513 +0000 UTC m=+945.807027113" watchObservedRunningTime="2026-02-23 17:46:29.995906075 +0000 UTC m=+945.812105665" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.057855 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" podStartSLOduration=34.789546939 podStartE2EDuration="39.057830562s" podCreationTimestamp="2026-02-23 17:45:51 +0000 UTC" firstStartedPulling="2026-02-23 17:46:24.537044189 +0000 UTC m=+940.353243779" lastFinishedPulling="2026-02-23 17:46:28.805327802 +0000 UTC m=+944.621527402" observedRunningTime="2026-02-23 17:46:30.053215339 +0000 UTC m=+945.869414949" watchObservedRunningTime="2026-02-23 17:46:30.057830562 +0000 UTC m=+945.874030162" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.059931 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bdnnb" podStartSLOduration=13.078257587 podStartE2EDuration="18.059921454s" podCreationTimestamp="2026-02-23 17:46:12 +0000 UTC" firstStartedPulling="2026-02-23 17:46:23.823701736 +0000 UTC m=+939.639901346" lastFinishedPulling="2026-02-23 17:46:28.805365613 +0000 UTC m=+944.621565213" observedRunningTime="2026-02-23 17:46:30.038631305 +0000 UTC m=+945.854830925" watchObservedRunningTime="2026-02-23 17:46:30.059921454 +0000 UTC m=+945.876121054" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.119613 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" podStartSLOduration=3.172397498 podStartE2EDuration="38.119583745s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.279440928 +0000 UTC m=+910.095640528" lastFinishedPulling="2026-02-23 17:46:29.226627165 +0000 UTC m=+945.042826775" observedRunningTime="2026-02-23 17:46:30.115285259 +0000 UTC m=+945.931484859" watchObservedRunningTime="2026-02-23 17:46:30.119583745 +0000 UTC m=+945.935783335" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.122539 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rbd7w" podStartSLOduration=17.11758781 podStartE2EDuration="22.122523125s" podCreationTimestamp="2026-02-23 17:46:08 +0000 UTC" firstStartedPulling="2026-02-23 17:46:23.826321669 +0000 UTC m=+939.642521269" lastFinishedPulling="2026-02-23 17:46:28.831256994 +0000 UTC m=+944.647456584" observedRunningTime="2026-02-23 17:46:30.084817125 +0000 UTC m=+945.901016715" watchObservedRunningTime="2026-02-23 17:46:30.122523125 +0000 UTC m=+945.938722725" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.645323 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.645447 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.693691 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.938599 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" event={"ID":"24d796b9-e6ea-4b70-9424-1352f71c80a6","Type":"ContainerStarted","Data":"e42b2e435826b82cf28e12b2dca42186001a2967c5e8ba328b47047d5b3eac64"} Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.940833 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.965025 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" podStartSLOduration=3.05190112 podStartE2EDuration="38.965006417s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:54.137824206 +0000 UTC m=+909.954023806" lastFinishedPulling="2026-02-23 17:46:30.050929503 +0000 UTC m=+945.867129103" observedRunningTime="2026-02-23 17:46:30.961853754 +0000 UTC m=+946.778053354" watchObservedRunningTime="2026-02-23 17:46:30.965006417 +0000 UTC m=+946.781206017" Feb 23 17:46:30 crc kubenswrapper[4724]: I0223 17:46:30.999485 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:31 crc kubenswrapper[4724]: I0223 17:46:31.243192 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.215038 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-4zgfm" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.282123 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zm7cw" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.317891 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vqls9" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.346051 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.350338 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-gmdl7" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.373914 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f5x72" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.541744 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-22lgm" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.673293 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:46:32 
crc kubenswrapper[4724]: I0223 17:46:32.676142 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9gtq7" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.693349 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-fxj7d" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.708551 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.708639 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.762450 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.929075 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xdfp8" Feb 23 17:46:32 crc kubenswrapper[4724]: I0223 17:46:32.985917 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-75ft9" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="registry-server" containerID="cri-o://f3d444a83f571af6f6ae56c967144fa50089367f520d716e94aed28bc64ce628" gracePeriod=2 Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.008789 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-d5z2j" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.105241 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-szmk8" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.250180 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.252929 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4tnw2" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.252988 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-d85f4" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.281165 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5cb6b78489-7tdgw" Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.995443 4724 generic.go:334] "Generic (PLEG): container finished" podID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerID="f3d444a83f571af6f6ae56c967144fa50089367f520d716e94aed28bc64ce628" exitCode=0 Feb 23 17:46:33 crc kubenswrapper[4724]: I0223 17:46:33.995507 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerDied","Data":"f3d444a83f571af6f6ae56c967144fa50089367f520d716e94aed28bc64ce628"} Feb 23 17:46:34 crc kubenswrapper[4724]: I0223 17:46:34.212118 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pb2dv" Feb 23 17:46:34 crc kubenswrapper[4724]: I0223 17:46:34.847362 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.131854 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bf9ddc465-xrp8k" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.525843 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.636356 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdh9j\" (UniqueName: \"kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j\") pod \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.636474 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content\") pod \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.636539 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities\") pod \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\" (UID: \"0fb435d7-a53d-44d4-b800-23f60d2aac7c\") " Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.637245 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities" (OuterVolumeSpecName: "utilities") pod "0fb435d7-a53d-44d4-b800-23f60d2aac7c" (UID: "0fb435d7-a53d-44d4-b800-23f60d2aac7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.644222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j" (OuterVolumeSpecName: "kube-api-access-jdh9j") pod "0fb435d7-a53d-44d4-b800-23f60d2aac7c" (UID: "0fb435d7-a53d-44d4-b800-23f60d2aac7c"). InnerVolumeSpecName "kube-api-access-jdh9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.690346 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fb435d7-a53d-44d4-b800-23f60d2aac7c" (UID: "0fb435d7-a53d-44d4-b800-23f60d2aac7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.738281 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdh9j\" (UniqueName: \"kubernetes.io/projected/0fb435d7-a53d-44d4-b800-23f60d2aac7c-kube-api-access-jdh9j\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.738309 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:35 crc kubenswrapper[4724]: I0223 17:46:35.738320 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb435d7-a53d-44d4-b800-23f60d2aac7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.031983 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" event={"ID":"973124e7-0723-4a5d-ab81-0ef8619f8754","Type":"ContainerStarted","Data":"9fb50033a9eef27b60c27d85896ccfda97ef951f3341a51a017f17ee35966ae8"} Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.034927 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.036133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ft9" event={"ID":"0fb435d7-a53d-44d4-b800-23f60d2aac7c","Type":"ContainerDied","Data":"a0f50e73d1b184eb7bd4b6e257ecfe5dc6fdec3ed1b4c66fcf9c12d010361aff"} Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.036246 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75ft9" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.036350 4724 scope.go:117] "RemoveContainer" containerID="f3d444a83f571af6f6ae56c967144fa50089367f520d716e94aed28bc64ce628" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.058084 4724 scope.go:117] "RemoveContainer" containerID="5243abe0260b4d7be9e78ddd0648164966e7ba35c579da6fb6f9c73b79784cab" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.058643 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" podStartSLOduration=2.437726305 podStartE2EDuration="44.058621599s" podCreationTimestamp="2026-02-23 17:45:52 +0000 UTC" firstStartedPulling="2026-02-23 17:45:53.88226106 +0000 UTC m=+909.698460660" lastFinishedPulling="2026-02-23 17:46:35.503156354 +0000 UTC m=+951.319355954" observedRunningTime="2026-02-23 17:46:36.0542189 +0000 UTC m=+951.870418500" watchObservedRunningTime="2026-02-23 17:46:36.058621599 +0000 UTC m=+951.874821199" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.078574 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.085714 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-75ft9"] Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.091277 4724 scope.go:117] "RemoveContainer" containerID="64e1204d137d00021831cf1bb440cb4743e9f200ac107e7c6294c36ffa84c9f2" Feb 23 17:46:36 crc kubenswrapper[4724]: I0223 17:46:36.962303 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" path="/var/lib/kubelet/pods/0fb435d7-a53d-44d4-b800-23f60d2aac7c/volumes" Feb 23 17:46:39 crc kubenswrapper[4724]: I0223 17:46:39.306073 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:39 crc kubenswrapper[4724]: I0223 17:46:39.306432 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:39 crc kubenswrapper[4724]: I0223 17:46:39.362762 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:40 crc kubenswrapper[4724]: I0223 17:46:40.110647 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:40 crc kubenswrapper[4724]: I0223 17:46:40.160243 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"] Feb 23 17:46:42 crc kubenswrapper[4724]: I0223 17:46:42.082141 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rbd7w" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="registry-server" containerID="cri-o://96d3203dbefc3253c0d7c8ddc8dd825f5fdc9da84f9ecd9d313659ae0841b293" gracePeriod=2 Feb 23 17:46:42 crc kubenswrapper[4724]: I0223 17:46:42.631922 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-djmpk" Feb 23 17:46:42 crc kubenswrapper[4724]: I0223 17:46:42.755708 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:42 crc kubenswrapper[4724]: I0223 17:46:42.804830 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-9s4mk" Feb 23 17:46:42 crc kubenswrapper[4724]: I0223 17:46:42.974596 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-p42tx" Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.090739 4724 generic.go:334] "Generic (PLEG): container finished" podID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerID="96d3203dbefc3253c0d7c8ddc8dd825f5fdc9da84f9ecd9d313659ae0841b293" exitCode=0 Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.090793 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerDied","Data":"96d3203dbefc3253c0d7c8ddc8dd825f5fdc9da84f9ecd9d313659ae0841b293"} Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.138204 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-92g5j" Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.166788 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wqsvk" Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.444258 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:43 crc kubenswrapper[4724]: I0223 17:46:43.444566 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bdnnb" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="registry-server" containerID="cri-o://18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba" gracePeriod=2 Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.068117 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.077840 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content\") pod \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.078021 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skgc2\" (UniqueName: \"kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2\") pod \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.078056 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities\") pod \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\" (UID: \"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede\") " Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.079066 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities" (OuterVolumeSpecName: "utilities") pod "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" (UID: "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.083673 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2" (OuterVolumeSpecName: "kube-api-access-skgc2") pod "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" (UID: "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede"). InnerVolumeSpecName "kube-api-access-skgc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.129155 4724 generic.go:334] "Generic (PLEG): container finished" podID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerID="18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba" exitCode=0 Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.129213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerDied","Data":"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba"} Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.129241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdnnb" event={"ID":"c84eafd3-7b24-45c7-b6ad-d813d6cf9ede","Type":"ContainerDied","Data":"71fb9107eec885209d25042a2b6aa1b4118bc38a69bdbd9cdfc51a51a69d3e26"} Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.129261 4724 scope.go:117] "RemoveContainer" containerID="18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.129894 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bdnnb" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.141496 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" (UID: "c84eafd3-7b24-45c7-b6ad-d813d6cf9ede"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.150121 4724 scope.go:117] "RemoveContainer" containerID="8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.169405 4724 scope.go:117] "RemoveContainer" containerID="5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.179124 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skgc2\" (UniqueName: \"kubernetes.io/projected/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-kube-api-access-skgc2\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.179157 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.179167 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.194579 4724 scope.go:117] "RemoveContainer" containerID="18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba" Feb 23 17:46:45 crc kubenswrapper[4724]: E0223 17:46:45.195152 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba\": container with ID starting with 18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba not found: ID does not exist" containerID="18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.195187 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba"} err="failed to get container status \"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba\": rpc error: code = NotFound desc = could not find container \"18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba\": container with ID starting with 18b750248c89325c3b1821f30ac879b7505e0b44351df4f5803d40762ad5c1ba not found: ID does not exist" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.195212 4724 scope.go:117] "RemoveContainer" containerID="8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f" Feb 23 17:46:45 crc kubenswrapper[4724]: E0223 17:46:45.195688 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f\": container with ID starting with 8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f not found: ID does not exist" 
containerID="8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.195719 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f"} err="failed to get container status \"8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f\": rpc error: code = NotFound desc = could not find container \"8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f\": container with ID starting with 8f2a24830312982d8ef53b62393e61cef02b40fb1c47d20798457679e6d8649f not found: ID does not exist" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.195742 4724 scope.go:117] "RemoveContainer" containerID="5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba" Feb 23 17:46:45 crc kubenswrapper[4724]: E0223 17:46:45.196214 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba\": container with ID starting with 5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba not found: ID does not exist" containerID="5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.196240 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba"} err="failed to get container status \"5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba\": rpc error: code = NotFound desc = could not find container \"5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba\": container with ID starting with 5dbfc1c30d598a86e8d3daaa8a206a2be6a5c3e0dbc7c85fd08dc8d1435544ba not found: ID does not exist" Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.469871 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:45 crc kubenswrapper[4724]: I0223 17:46:45.475908 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bdnnb"] Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.695951 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.706017 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content\") pod \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.706160 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzgfw\" (UniqueName: \"kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw\") pod \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.706199 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities\") pod \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\" (UID: \"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c\") " Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.707154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities" (OuterVolumeSpecName: "utilities") pod "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" (UID: "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.716610 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw" (OuterVolumeSpecName: "kube-api-access-jzgfw") pod "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" (UID: "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c"). InnerVolumeSpecName "kube-api-access-jzgfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.738367 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" (UID: "4fe3a7ec-fc03-4115-9ba3-5c2ab162382c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.808531 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzgfw\" (UniqueName: \"kubernetes.io/projected/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-kube-api-access-jzgfw\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.808573 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.808586 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:46:46 crc kubenswrapper[4724]: I0223 17:46:46.965490 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" path="/var/lib/kubelet/pods/c84eafd3-7b24-45c7-b6ad-d813d6cf9ede/volumes" Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.146431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbd7w" event={"ID":"4fe3a7ec-fc03-4115-9ba3-5c2ab162382c","Type":"ContainerDied","Data":"decb3487ff38eebf49f21ed718b543a3f5953c55c2362711e1e0498d2f61ec76"} Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.146505 4724 scope.go:117] "RemoveContainer" containerID="96d3203dbefc3253c0d7c8ddc8dd825f5fdc9da84f9ecd9d313659ae0841b293" Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.146709 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbd7w" Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.166910 4724 scope.go:117] "RemoveContainer" containerID="8b07114d851686e0b324d850290d219907af9da2b7324c624d606c2724cd9b4c" Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.177530 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"] Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.184466 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbd7w"] Feb 23 17:46:47 crc kubenswrapper[4724]: I0223 17:46:47.193752 4724 scope.go:117] "RemoveContainer" containerID="0a9d82626480647767c8fef9fb0e14590d459d9bb19f2d3571d8700727d6a366" Feb 23 17:46:48 crc kubenswrapper[4724]: I0223 17:46:48.959786 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" path="/var/lib/kubelet/pods/4fe3a7ec-fc03-4115-9ba3-5c2ab162382c/volumes" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.477235 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478188 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478205 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478220 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="extract-utilities" Feb 23 17:47:02 crc 
kubenswrapper[4724]: I0223 17:47:02.478228 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="extract-utilities" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478244 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478253 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478271 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478279 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478292 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478299 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478332 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478339 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" containerName="extract-content" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478353 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478363 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478376 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="extract-utilities" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478384 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="extract-utilities" Feb 23 17:47:02 crc kubenswrapper[4724]: E0223 17:47:02.478412 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="extract-utilities" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478421 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="extract-utilities" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478583 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb435d7-a53d-44d4-b800-23f60d2aac7c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478607 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe3a7ec-fc03-4115-9ba3-5c2ab162382c" containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.478619 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84eafd3-7b24-45c7-b6ad-d813d6cf9ede" 
containerName="registry-server" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.479568 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.484555 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.484999 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.485045 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.491971 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r5655" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.542894 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.544612 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.548164 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.550597 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.551453 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.551507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c848c\" (UniqueName: \"kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.551765 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.551824 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnx5\" (UniqueName: \"kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.551848 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" 
Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.554999 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.652649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.652708 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c848c\" (UniqueName: \"kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.652795 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.652826 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnx5\" (UniqueName: \"kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.652855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.653710 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.653820 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.653849 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.672023 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c848c\" (UniqueName: \"kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c\") pod \"dnsmasq-dns-bf56b5889-h4bb8\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " 
pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.672873 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnx5\" (UniqueName: \"kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5\") pod \"dnsmasq-dns-784b55c5d9-mvlbh\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.800826 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:02 crc kubenswrapper[4724]: I0223 17:47:02.859823 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:03 crc kubenswrapper[4724]: I0223 17:47:03.061884 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:03 crc kubenswrapper[4724]: W0223 17:47:03.068628 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c7f0c4_57ea_4998_a98a_12f2d23d797f.slice/crio-37436c9d12dd67b752809dc4fd3580397b21680a65a50e21998196472fd922c6 WatchSource:0}: Error finding container 37436c9d12dd67b752809dc4fd3580397b21680a65a50e21998196472fd922c6: Status 404 returned error can't find the container with id 37436c9d12dd67b752809dc4fd3580397b21680a65a50e21998196472fd922c6 Feb 23 17:47:03 crc kubenswrapper[4724]: I0223 17:47:03.137102 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:03 crc kubenswrapper[4724]: I0223 17:47:03.258525 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" event={"ID":"78c7f0c4-57ea-4998-a98a-12f2d23d797f","Type":"ContainerStarted","Data":"37436c9d12dd67b752809dc4fd3580397b21680a65a50e21998196472fd922c6"} Feb 23 17:47:03 crc kubenswrapper[4724]: I0223 17:47:03.259933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" event={"ID":"53c07b37-3110-4930-a495-54ecb6f4e7fd","Type":"ContainerStarted","Data":"f98132bbe6f26f27c32ac64a7f29476e2f94258ac881f41bedfcacb066ac5f5a"} Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.286467 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.314276 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.315966 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.333825 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.417969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.418127 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdmb\" (UniqueName: \"kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.418267 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.519656 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.519727 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbdmb\" (UniqueName: \"kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.519778 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.520908 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.521030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.548935 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbdmb\" (UniqueName: 
\"kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb\") pod \"dnsmasq-dns-7fc74595bc-c6dvf\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.613195 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.635495 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7847d45595-nlkm7"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.637436 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.641001 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.642537 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-nlkm7"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.824658 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.824714 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8dfk\" (UniqueName: \"kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.824768 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.902036 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-nlkm7"] Feb 23 17:47:06 crc kubenswrapper[4724]: E0223 17:47:06.907694 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-x8dfk], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7847d45595-nlkm7" podUID="990d4b18-ede3-4806-ac28-2a35ea767d3a" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.926158 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.926262 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 
17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.926289 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8dfk\" (UniqueName: \"kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.927269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.927803 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.930805 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.932069 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.940991 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:47:06 crc kubenswrapper[4724]: I0223 17:47:06.971491 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8dfk\" (UniqueName: \"kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk\") pod \"dnsmasq-dns-7847d45595-nlkm7\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.130805 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfgt\" (UniqueName: \"kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.131265 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.131617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.234290 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: 
\"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.234354 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfgt\" (UniqueName: \"kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.234426 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.235480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.235545 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.252508 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfgt\" (UniqueName: \"kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt\") pod \"dnsmasq-dns-74bcc47849-4gdn2\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.259445 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.294737 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.312857 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.437596 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8dfk\" (UniqueName: \"kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk\") pod \"990d4b18-ede3-4806-ac28-2a35ea767d3a\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.437764 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config\") pod \"990d4b18-ede3-4806-ac28-2a35ea767d3a\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.437824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc\") pod \"990d4b18-ede3-4806-ac28-2a35ea767d3a\" (UID: \"990d4b18-ede3-4806-ac28-2a35ea767d3a\") " Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.438228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config" (OuterVolumeSpecName: "config") pod "990d4b18-ede3-4806-ac28-2a35ea767d3a" (UID: "990d4b18-ede3-4806-ac28-2a35ea767d3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.438306 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "990d4b18-ede3-4806-ac28-2a35ea767d3a" (UID: "990d4b18-ede3-4806-ac28-2a35ea767d3a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.446587 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk" (OuterVolumeSpecName: "kube-api-access-x8dfk") pod "990d4b18-ede3-4806-ac28-2a35ea767d3a" (UID: "990d4b18-ede3-4806-ac28-2a35ea767d3a"). InnerVolumeSpecName "kube-api-access-x8dfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.474616 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.476305 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.488189 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.488434 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-pwtqf" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.488582 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.488755 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.488891 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.491552 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.491693 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.509338 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.539615 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8dfk\" (UniqueName: \"kubernetes.io/projected/990d4b18-ede3-4806-ac28-2a35ea767d3a-kube-api-access-x8dfk\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.539658 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.539696 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990d4b18-ede3-4806-ac28-2a35ea767d3a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.641303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.641710 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.641903 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: 
I0223 17:47:07.641953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.641992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642040 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642074 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e165de7-7e1a-47c3-84d2-9fc675a2224a-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642128 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e165de7-7e1a-47c3-84d2-9fc675a2224a-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642257 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.642302 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b96z\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-kube-api-access-8b96z\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743407 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-tls\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e165de7-7e1a-47c3-84d2-9fc675a2224a-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743495 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e165de7-7e1a-47c3-84d2-9fc675a2224a-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743526 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743553 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b96z\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-kube-api-access-8b96z\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743574 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743597 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743618 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743659 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-erlang-cookie\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.743680 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.744710 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.744720 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.744760 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.745048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.745368 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.745726 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e165de7-7e1a-47c3-84d2-9fc675a2224a-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.749414 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.755414 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: 
\"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.755761 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e165de7-7e1a-47c3-84d2-9fc675a2224a-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.760890 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b96z\" (UniqueName: \"kubernetes.io/projected/6e165de7-7e1a-47c3-84d2-9fc675a2224a-kube-api-access-8b96z\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.761571 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e165de7-7e1a-47c3-84d2-9fc675a2224a-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.767358 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6e165de7-7e1a-47c3-84d2-9fc675a2224a\") " pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.773657 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.780150 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.785713 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.786105 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.786237 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.786436 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.786665 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.786840 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.787069 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.794590 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gpzmg" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.803354 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946279 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946334 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mtqg\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946366 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946405 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946424 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946446 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946596 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946776 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" 
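The entries above show the kubelet volume reconciler walking each pod through the same mount lifecycle: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" per volume (with "UnmountVolume started", "UnmountVolume.TearDown succeeded", and "Volume detached" on the teardown path, as seen for pod 990d4b18-ede3-4806-ac28-2a35ea767d3a). A minimal sketch of how such a window can be audited for mounts that started but never completed, assuming the journal is exported one entry per line to a plain-text file; the filename and regexes are illustrative, keyed only to the message strings visible in this log:

```python
import re
import sys

# The kubelet escapes quotes inside structured log values, so a volume name
# appears in the journal text as \"config\" while the trailing pod="ns/name"
# field uses plain quotes. These patterns key off exactly the messages above:
#   "operationExecutor.MountVolume started for volume \"...\" ..." pod="..."
#   "MountVolume.SetUp succeeded for volume \"...\" ..." pod="..."
STARTED = re.compile(r'MountVolume started for volume \\"([^"\\]+)\\".*?pod="([^"]+)"')
SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*?pod="([^"]+)"')

def pending_mounts(lines):
    """Return (pod, volume) pairs whose mount started but never reported SetUp success."""
    started, succeeded = set(), set()
    for line in lines:  # assumes one journal entry per line, as journalctl emits
        if m := STARTED.search(line):
            started.add((m.group(2), m.group(1)))
        if m := SUCCEEDED.search(line):
            succeeded.add((m.group(2), m.group(1)))
    return sorted(started - succeeded)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"  # illustrative filename
    with open(path, encoding="utf-8") as f:
        for pod, volume in pending_mounts(f):
            print(f"mount started but no SetUp success: pod={pod} volume={volume}")
```

In the window shown here, every "MountVolume started" is eventually matched by a "SetUp succeeded" entry, even for dnsmasq-dns-7847d45595-nlkm7, whose sync is then aborted with "unmounted volumes=[config dns-svc kube-api-access-x8dfk] ... context canceled" and rolled back through UnmountVolume.TearDown and "Volume detached", so the sketch would print nothing; its value is in catching the runs where that pairing stops holding.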
Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946799 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:07 crc kubenswrapper[4724]: I0223 17:47:07.946827 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048543 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mtqg\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048639 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048678 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048743 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048769 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048785 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.048801 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.049678 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.051873 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.051939 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.052499 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.052530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.052693 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.053023 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.055025 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.059158 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.071846 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mtqg\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.080361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.087349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.108987 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.110323 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112048 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112427 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112719 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nbsqm" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112752 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112860 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.112956 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.114492 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.131269 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.131996 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252300 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252349 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252403 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252431 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252465 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252624 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252735 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252833 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252894 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.252985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq2xb\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.303872 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-nlkm7" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.346791 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-nlkm7"] Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354422 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354456 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354515 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354542 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq2xb\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354605 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354626 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354663 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.354692 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.355105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.356362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.356696 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.356802 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.356803 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-nlkm7"] Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.357687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.359017 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.359113 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.360338 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.363012 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.369343 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.375026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq2xb\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.381613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.437833 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:47:08 crc kubenswrapper[4724]: I0223 17:47:08.958412 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990d4b18-ede3-4806-ac28-2a35ea767d3a" path="/var/lib/kubelet/pods/990d4b18-ede3-4806-ac28-2a35ea767d3a/volumes" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.309414 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.310957 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.319065 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.319554 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-stgwx" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.319579 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.319736 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.322356 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.323744 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473314 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-kolla-config\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473453 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6dw\" (UniqueName: \"kubernetes.io/projected/e48a20ad-1863-458a-ba27-6b24cee6df0c-kube-api-access-cw6dw\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473500 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473535 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-default\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473605 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473769 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.473890 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575182 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-kolla-config\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6dw\" (UniqueName: \"kubernetes.io/projected/e48a20ad-1863-458a-ba27-6b24cee6df0c-kube-api-access-cw6dw\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575270 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-default\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575292 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575334 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.575768 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.576521 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.577485 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-kolla-config\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.577530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-config-data-default\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.578743 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e48a20ad-1863-458a-ba27-6b24cee6df0c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.582355 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.584718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48a20ad-1863-458a-ba27-6b24cee6df0c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.607063 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6dw\" (UniqueName: \"kubernetes.io/projected/e48a20ad-1863-458a-ba27-6b24cee6df0c-kube-api-access-cw6dw\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.623566 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e48a20ad-1863-458a-ba27-6b24cee6df0c\") " pod="openstack/openstack-galera-0" Feb 23 17:47:09 crc kubenswrapper[4724]: I0223 17:47:09.636914 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.626777 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.630937 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.633318 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jqhds" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.633356 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.633578 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.637336 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.647465 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795517 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795611 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795730 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795753 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8cck\" (UniqueName: \"kubernetes.io/projected/c7ad5fb5-517e-4249-9da4-08d99599caf0-kube-api-access-l8cck\") pod 
\"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795818 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.795850 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898027 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8cck\" (UniqueName: \"kubernetes.io/projected/c7ad5fb5-517e-4249-9da4-08d99599caf0-kube-api-access-l8cck\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898079 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898114 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898182 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.898306 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.899840 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.899936 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.900344 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.901541 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.901718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5fb5-517e-4249-9da4-08d99599caf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.903712 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.919272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5fb5-517e-4249-9da4-08d99599caf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.921437 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8cck\" (UniqueName: \"kubernetes.io/projected/c7ad5fb5-517e-4249-9da4-08d99599caf0-kube-api-access-l8cck\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc 
kubenswrapper[4724]: I0223 17:47:10.927053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c7ad5fb5-517e-4249-9da4-08d99599caf0\") " pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:10 crc kubenswrapper[4724]: I0223 17:47:10.966948 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.058730 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.060000 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.064052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q4bgj" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.064158 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.068024 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.074779 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.210354 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-config-data\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.210646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4t4\" (UniqueName: \"kubernetes.io/projected/eadce7d0-a9bc-4840-919b-a341aba11ca2-kube-api-access-6k4t4\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.210752 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.210843 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-kolla-config\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.210876 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.312303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-config-data\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.312420 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k4t4\" (UniqueName: \"kubernetes.io/projected/eadce7d0-a9bc-4840-919b-a341aba11ca2-kube-api-access-6k4t4\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.312469 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.312521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.312546 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-kolla-config\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.313486 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-config-data\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.313600 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadce7d0-a9bc-4840-919b-a341aba11ca2-kolla-config\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.323736 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.324566 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadce7d0-a9bc-4840-919b-a341aba11ca2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.329551 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k4t4\" (UniqueName: \"kubernetes.io/projected/eadce7d0-a9bc-4840-919b-a341aba11ca2-kube-api-access-6k4t4\") pod \"memcached-0\" (UID: \"eadce7d0-a9bc-4840-919b-a341aba11ca2\") " pod="openstack/memcached-0" Feb 23 17:47:11 crc kubenswrapper[4724]: I0223 17:47:11.382599 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.478009 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.480033 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.484296 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tlczf" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.489282 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.644342 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgvs\" (UniqueName: \"kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs\") pod \"kube-state-metrics-0\" (UID: \"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52\") " pod="openstack/kube-state-metrics-0" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.745763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgvs\" (UniqueName: \"kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs\") pod \"kube-state-metrics-0\" (UID: \"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52\") " pod="openstack/kube-state-metrics-0" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.775004 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgvs\" (UniqueName: \"kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs\") pod \"kube-state-metrics-0\" (UID: \"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52\") " pod="openstack/kube-state-metrics-0" Feb 23 17:47:13 crc kubenswrapper[4724]: I0223 17:47:13.796271 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.851882 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.854100 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.856464 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.856611 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.856734 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.856837 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.857005 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.857113 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.857466 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8mdd8" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.863779 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.878335 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975341 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975383 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975510 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqft\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975606 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975706 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975783 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975901 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.975936 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:14 crc kubenswrapper[4724]: I0223 17:47:14.976033 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.077991 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078083 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078162 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078232 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078257 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078352 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078419 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.078499 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqft\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.079087 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.079087 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.082067 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.082724 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.084796 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.085505 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.086320 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.088950 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.094729 4724 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.094766 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47f183732fd6cce9e8579bb5bdfe275794daae311819ba60fd57e3b1b945523c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.099144 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqft\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.136908 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:15 crc kubenswrapper[4724]: I0223 17:47:15.176917 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.204871 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hh76w"] Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.206540 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.214538 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.214639 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.214541 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sfzrj" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.227032 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w"] Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.259508 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lzxrb"] Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.262279 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.273154 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lzxrb"] Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304305 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-log-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304373 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304472 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-combined-ca-bundle\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq4w6\" (UniqueName: \"kubernetes.io/projected/8fd48d48-59c7-4470-9223-c3b3f786c8d9-kube-api-access-sq4w6\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304616 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fd48d48-59c7-4470-9223-c3b3f786c8d9-scripts\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.304654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-ovn-controller-tls-certs\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.405825 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-lib\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.405902 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq4w6\" (UniqueName: 
\"kubernetes.io/projected/8fd48d48-59c7-4470-9223-c3b3f786c8d9-kube-api-access-sq4w6\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.405933 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fd48d48-59c7-4470-9223-c3b3f786c8d9-scripts\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.405955 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-run\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.405987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-ovn-controller-tls-certs\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406017 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-log-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406301 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406336 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-combined-ca-bundle\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406364 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-log\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406396 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-etc-ovs\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f6f0027-7e55-407c-be1d-5dc5f57250a8-scripts\") pod \"ovn-controller-ovs-lzxrb\" (UID: 
\"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406429 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbmc\" (UniqueName: \"kubernetes.io/projected/4f6f0027-7e55-407c-be1d-5dc5f57250a8-kube-api-access-jdbmc\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.406453 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.407150 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-log-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.407265 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run-ovn\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.407307 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8fd48d48-59c7-4470-9223-c3b3f786c8d9-var-run\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.408559 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fd48d48-59c7-4470-9223-c3b3f786c8d9-scripts\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.412688 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-ovn-controller-tls-certs\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.412944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd48d48-59c7-4470-9223-c3b3f786c8d9-combined-ca-bundle\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.423572 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq4w6\" (UniqueName: \"kubernetes.io/projected/8fd48d48-59c7-4470-9223-c3b3f786c8d9-kube-api-access-sq4w6\") pod \"ovn-controller-hh76w\" (UID: \"8fd48d48-59c7-4470-9223-c3b3f786c8d9\") " pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508038 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-log\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508081 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-etc-ovs\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508097 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f6f0027-7e55-407c-be1d-5dc5f57250a8-scripts\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508114 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbmc\" (UniqueName: \"kubernetes.io/projected/4f6f0027-7e55-407c-be1d-5dc5f57250a8-kube-api-access-jdbmc\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508152 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-lib\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508204 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-run\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508369 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-run\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508428 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-log\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508661 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-var-lib\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.508820 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f6f0027-7e55-407c-be1d-5dc5f57250a8-etc-ovs\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc 
kubenswrapper[4724]: I0223 17:47:16.510198 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f6f0027-7e55-407c-be1d-5dc5f57250a8-scripts\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.531352 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbmc\" (UniqueName: \"kubernetes.io/projected/4f6f0027-7e55-407c-be1d-5dc5f57250a8-kube-api-access-jdbmc\") pod \"ovn-controller-ovs-lzxrb\" (UID: \"4f6f0027-7e55-407c-be1d-5dc5f57250a8\") " pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.536889 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w" Feb 23 17:47:16 crc kubenswrapper[4724]: I0223 17:47:16.584289 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.409233 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.416187 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.418955 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.419001 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.419277 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.419375 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qn42w" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.422879 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.423978 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526309 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-config\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526522 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rh4s\" (UniqueName: \"kubernetes.io/projected/ba834afe-088c-4b0c-97f5-7986f8f9c988-kube-api-access-8rh4s\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " 
pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526564 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526592 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526611 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526704 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.526774 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628547 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628599 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-config\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628668 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rh4s\" 
(UniqueName: \"kubernetes.io/projected/ba834afe-088c-4b0c-97f5-7986f8f9c988-kube-api-access-8rh4s\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628687 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.628719 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.629037 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.630351 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-config\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.630402 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba834afe-088c-4b0c-97f5-7986f8f9c988-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.630869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.635100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.636196 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.639034 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba834afe-088c-4b0c-97f5-7986f8f9c988-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.647611 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rh4s\" (UniqueName: \"kubernetes.io/projected/ba834afe-088c-4b0c-97f5-7986f8f9c988-kube-api-access-8rh4s\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.669632 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ba834afe-088c-4b0c-97f5-7986f8f9c988\") " pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:17 crc kubenswrapper[4724]: I0223 17:47:17.739578 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 17:47:19 crc kubenswrapper[4724]: I0223 17:47:19.037186 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.271142 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.273514 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.275828 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.276005 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.276168 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.276363 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wdl28" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.281036 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391728 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391768 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-config\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391832 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.391958 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.392050 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.392116 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhm6v\" (UniqueName: \"kubernetes.io/projected/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-kube-api-access-nhm6v\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.493525 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-config\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.493804 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.493945 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.494053 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.494588 
4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-config\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.494608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhm6v\" (UniqueName: \"kubernetes.io/projected/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-kube-api-access-nhm6v\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.494769 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.494260 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.495292 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.495410 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.496718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.497250 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.499528 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.499716 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") 
" pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.507062 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.511052 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhm6v\" (UniqueName: \"kubernetes.io/projected/02d0b5c7-a3f7-47d6-a52f-cff5a0946cea-kube-api-access-nhm6v\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.514705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea\") " pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:20 crc kubenswrapper[4724]: I0223 17:47:20.594976 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.259705 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.259790 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.259929 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c848c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bf56b5889-h4bb8_openstack(53c07b37-3110-4930-a495-54ecb6f4e7fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.261289 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" podUID="53c07b37-3110-4930-a495-54ecb6f4e7fd" Feb 23 17:47:22 crc kubenswrapper[4724]: W0223 17:47:22.274862 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272b0850_c495_47ac_a514_1483b621a887.slice/crio-cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182 WatchSource:0}: Error finding container cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182: Status 404 returned error can't find the container with id cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182 Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.277030 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.277061 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.277149 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bnx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-784b55c5d9-mvlbh_openstack(78c7f0c4-57ea-4998-a98a-12f2d23d797f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:22 crc kubenswrapper[4724]: E0223 17:47:22.278792 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" podUID="78c7f0c4-57ea-4998-a98a-12f2d23d797f" Feb 23 17:47:22 crc kubenswrapper[4724]: I0223 17:47:22.486771 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" event={"ID":"272b0850-c495-47ac-a514-1483b621a887","Type":"ContainerStarted","Data":"cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.240576 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.350992 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.353074 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.432936 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.451137 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.462574 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92d7742_3151_48d7_8493_ff07e6803966.slice/crio-41618f9c9025ec6ea3d96c89d9b63cd1334fed084b721032764f9072ea349e07 WatchSource:0}: Error finding container 41618f9c9025ec6ea3d96c89d9b63cd1334fed084b721032764f9072ea349e07: Status 404 returned error can't find the container with id 41618f9c9025ec6ea3d96c89d9b63cd1334fed084b721032764f9072ea349e07 Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.464788 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.473917 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc\") pod \"53c07b37-3110-4930-a495-54ecb6f4e7fd\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474028 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bnx5\" (UniqueName: \"kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5\") pod \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474083 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config\") pod \"53c07b37-3110-4930-a495-54ecb6f4e7fd\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474112 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c848c\" (UniqueName: \"kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c\") pod \"53c07b37-3110-4930-a495-54ecb6f4e7fd\" (UID: \"53c07b37-3110-4930-a495-54ecb6f4e7fd\") " Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474250 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config\") pod \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\" (UID: \"78c7f0c4-57ea-4998-a98a-12f2d23d797f\") " Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474482 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53c07b37-3110-4930-a495-54ecb6f4e7fd" (UID: "53c07b37-3110-4930-a495-54ecb6f4e7fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.474826 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config" (OuterVolumeSpecName: "config") pod "53c07b37-3110-4930-a495-54ecb6f4e7fd" (UID: "53c07b37-3110-4930-a495-54ecb6f4e7fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.476052 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config" (OuterVolumeSpecName: "config") pod "78c7f0c4-57ea-4998-a98a-12f2d23d797f" (UID: "78c7f0c4-57ea-4998-a98a-12f2d23d797f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.477008 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.477032 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7f0c4-57ea-4998-a98a-12f2d23d797f-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.477042 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53c07b37-3110-4930-a495-54ecb6f4e7fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.478621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5" (OuterVolumeSpecName: "kube-api-access-5bnx5") pod "78c7f0c4-57ea-4998-a98a-12f2d23d797f" (UID: "78c7f0c4-57ea-4998-a98a-12f2d23d797f"). InnerVolumeSpecName "kube-api-access-5bnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.478677 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c" (OuterVolumeSpecName: "kube-api-access-c848c") pod "53c07b37-3110-4930-a495-54ecb6f4e7fd" (UID: "53c07b37-3110-4930-a495-54ecb6f4e7fd"). InnerVolumeSpecName "kube-api-access-c848c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.486935 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeadce7d0_a9bc_4840_919b_a341aba11ca2.slice/crio-50f81f88fb4ff4114c2e1d9817d59f89849f63d667b566d48999964136fd8e05 WatchSource:0}: Error finding container 50f81f88fb4ff4114c2e1d9817d59f89849f63d667b566d48999964136fd8e05: Status 404 returned error can't find the container with id 50f81f88fb4ff4114c2e1d9817d59f89849f63d667b566d48999964136fd8e05 Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.488150 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e165de7_7e1a_47c3_84d2_9fc675a2224a.slice/crio-726947e3fbb42d916aec98c206580061c5df9d2326df840ce8f9fa7613f6aa0c WatchSource:0}: Error finding container 726947e3fbb42d916aec98c206580061c5df9d2326df840ce8f9fa7613f6aa0c: Status 404 returned error can't find the container with id 726947e3fbb42d916aec98c206580061c5df9d2326df840ce8f9fa7613f6aa0c Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.497601 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerStarted","Data":"ed255c1c0ab48d58025725d9eadfae031b53d909557b119d63c1fc643f97dab3"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.500275 4724 generic.go:334] "Generic (PLEG): container finished" podID="272b0850-c495-47ac-a514-1483b621a887" containerID="e6161b84f732b56c3ba840bb98ab14819b2577f1fea6af7cad95773d61607009" exitCode=0 Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.500328 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" event={"ID":"272b0850-c495-47ac-a514-1483b621a887","Type":"ContainerDied","Data":"e6161b84f732b56c3ba840bb98ab14819b2577f1fea6af7cad95773d61607009"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.501792 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" event={"ID":"78c7f0c4-57ea-4998-a98a-12f2d23d797f","Type":"ContainerDied","Data":"37436c9d12dd67b752809dc4fd3580397b21680a65a50e21998196472fd922c6"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.501866 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-mvlbh" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.502272 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.503641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" event={"ID":"f92d7742-3151-48d7-8493-ff07e6803966","Type":"ContainerStarted","Data":"41618f9c9025ec6ea3d96c89d9b63cd1334fed084b721032764f9072ea349e07"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.506205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ba834afe-088c-4b0c-97f5-7986f8f9c988","Type":"ContainerStarted","Data":"29e2e7744b89a7eb1314d50efe688be90a5205da883d3d57622b6840bf56ab27"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.508101 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" event={"ID":"53c07b37-3110-4930-a495-54ecb6f4e7fd","Type":"ContainerDied","Data":"f98132bbe6f26f27c32ac64a7f29476e2f94258ac881f41bedfcacb066ac5f5a"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.508158 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-h4bb8" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.510213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e48a20ad-1863-458a-ba27-6b24cee6df0c","Type":"ContainerStarted","Data":"26696ace8760f1783cd6583d4942203997b3c7846c555863eca9d2406e6d27c3"} Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.515238 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.528893 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.581991 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bnx5\" (UniqueName: \"kubernetes.io/projected/78c7f0c4-57ea-4998-a98a-12f2d23d797f-kube-api-access-5bnx5\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.582666 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c848c\" (UniqueName: \"kubernetes.io/projected/53c07b37-3110-4930-a495-54ecb6f4e7fd-kube-api-access-c848c\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.585551 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.591691 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-mvlbh"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.638296 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.655901 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-h4bb8"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.677630 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.692642 4724 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7ad5fb5_517e_4249_9da4_08d99599caf0.slice/crio-72faf685f236cae38b231437c6245ad62c04ad22f575a8776bcce8ad4f7295fd WatchSource:0}: Error finding container 72faf685f236cae38b231437c6245ad62c04ad22f575a8776bcce8ad4f7295fd: Status 404 returned error can't find the container with id 72faf685f236cae38b231437c6245ad62c04ad22f575a8776bcce8ad4f7295fd Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.696479 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.706137 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.726527 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.781350 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.856004 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d0b5c7_a3f7_47d6_a52f_cff5a0946cea.slice/crio-76abaefe32d53b7489f7f6a3d150edbee3da0a5c309b0259913026ec962ec9a5 WatchSource:0}: Error finding container 76abaefe32d53b7489f7f6a3d150edbee3da0a5c309b0259913026ec962ec9a5: Status 404 returned error can't find the container with id 76abaefe32d53b7489f7f6a3d150edbee3da0a5c309b0259913026ec962ec9a5 Feb 23 17:47:23 crc kubenswrapper[4724]: I0223 17:47:23.868661 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lzxrb"] Feb 23 17:47:23 crc kubenswrapper[4724]: E0223 17:47:23.903673 4724 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 23 17:47:23 crc kubenswrapper[4724]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/272b0850-c495-47ac-a514-1483b621a887/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 23 17:47:23 crc kubenswrapper[4724]: > podSandboxID="cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182" Feb 23 17:47:23 crc kubenswrapper[4724]: E0223 17:47:23.903888 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 17:47:23 crc kubenswrapper[4724]: container &Container{Name:dnsmasq-dns,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h65fh56h6fh87h85h57h76h5b7h94hffh649hfbh8ch5bch56fh5c5hbh86hf9h99h5dch95h66hd5h555h566h646h546h79h9dh55dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbdmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7fc74595bc-c6dvf_openstack(272b0850-c495-47ac-a514-1483b621a887): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/272b0850-c495-47ac-a514-1483b621a887/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 23 17:47:23 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 23 17:47:23 crc kubenswrapper[4724]: E0223 17:47:23.905542 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/272b0850-c495-47ac-a514-1483b621a887/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" podUID="272b0850-c495-47ac-a514-1483b621a887" Feb 23 17:47:23 crc kubenswrapper[4724]: W0223 17:47:23.957312 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6f0027_7e55_407c_be1d_5dc5f57250a8.slice/crio-52fbdae1d3e6fb736a433d0427525fe1d72e7f38d0019ab3a8645aa2b8f49043 WatchSource:0}: Error finding container 52fbdae1d3e6fb736a433d0427525fe1d72e7f38d0019ab3a8645aa2b8f49043: Status 404 returned error can't find the container with id 
52fbdae1d3e6fb736a433d0427525fe1d72e7f38d0019ab3a8645aa2b8f49043 Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.521062 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerStarted","Data":"bb42c41d5efb541cd096a5a897f5371ccbf7bcc91b1abb85e8ffc52104b8cc7b"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.523068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eadce7d0-a9bc-4840-919b-a341aba11ca2","Type":"ContainerStarted","Data":"50f81f88fb4ff4114c2e1d9817d59f89849f63d667b566d48999964136fd8e05"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.525577 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6e165de7-7e1a-47c3-84d2-9fc675a2224a","Type":"ContainerStarted","Data":"726947e3fbb42d916aec98c206580061c5df9d2326df840ce8f9fa7613f6aa0c"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.532198 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea","Type":"ContainerStarted","Data":"76abaefe32d53b7489f7f6a3d150edbee3da0a5c309b0259913026ec962ec9a5"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.533979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52","Type":"ContainerStarted","Data":"d093a0de64a221c2d8e46d488793acc49c2d668c942b59f2c8acf9c8d616944a"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.538202 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w" event={"ID":"8fd48d48-59c7-4470-9223-c3b3f786c8d9","Type":"ContainerStarted","Data":"7958944026fcd82d69665b04e20a9ac10371244bf12557e885ae46bf090a59d6"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.543018 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerStarted","Data":"c906f60d0417ea8d391bd8861d6707719f1bdcfe9a80c923c1852403bf706889"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.545295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c7ad5fb5-517e-4249-9da4-08d99599caf0","Type":"ContainerStarted","Data":"72faf685f236cae38b231437c6245ad62c04ad22f575a8776bcce8ad4f7295fd"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.547298 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lzxrb" event={"ID":"4f6f0027-7e55-407c-be1d-5dc5f57250a8","Type":"ContainerStarted","Data":"52fbdae1d3e6fb736a433d0427525fe1d72e7f38d0019ab3a8645aa2b8f49043"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.550372 4724 generic.go:334] "Generic (PLEG): container finished" podID="f92d7742-3151-48d7-8493-ff07e6803966" containerID="b49bd863857d2d200e65ce8a19823c0111ab1a3ee4e7a82b58bdb77647345899" exitCode=0 Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.550435 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" event={"ID":"f92d7742-3151-48d7-8493-ff07e6803966","Type":"ContainerDied","Data":"b49bd863857d2d200e65ce8a19823c0111ab1a3ee4e7a82b58bdb77647345899"} Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.968082 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="53c07b37-3110-4930-a495-54ecb6f4e7fd" path="/var/lib/kubelet/pods/53c07b37-3110-4930-a495-54ecb6f4e7fd/volumes" Feb 23 17:47:24 crc kubenswrapper[4724]: I0223 17:47:24.969737 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c7f0c4-57ea-4998-a98a-12f2d23d797f" path="/var/lib/kubelet/pods/78c7f0c4-57ea-4998-a98a-12f2d23d797f/volumes" Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.620788 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" event={"ID":"272b0850-c495-47ac-a514-1483b621a887","Type":"ContainerStarted","Data":"000dbea1d0c2f638aac70927ceb2336b8f1a2a73fac16ba921beb769c5dfcb2c"} Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.621605 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.626538 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" event={"ID":"f92d7742-3151-48d7-8493-ff07e6803966","Type":"ContainerStarted","Data":"925e90b22081f3b5d2ac56da2f541030986352340e4160a8634ae05c35c20a73"} Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.626755 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.649242 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" podStartSLOduration=25.388455017 podStartE2EDuration="25.649223891s" podCreationTimestamp="2026-02-23 17:47:06 +0000 UTC" firstStartedPulling="2026-02-23 17:47:22.279180749 +0000 UTC m=+998.095380369" lastFinishedPulling="2026-02-23 17:47:22.539949633 +0000 UTC m=+998.356149243" observedRunningTime="2026-02-23 17:47:31.637216972 +0000 UTC m=+1007.453416572" watchObservedRunningTime="2026-02-23 17:47:31.649223891 +0000 UTC m=+1007.465423491" Feb 23 17:47:31 crc kubenswrapper[4724]: I0223 17:47:31.658681 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" podStartSLOduration=25.658665163 podStartE2EDuration="25.658665163s" podCreationTimestamp="2026-02-23 17:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:47:31.654093296 +0000 UTC m=+1007.470292916" watchObservedRunningTime="2026-02-23 17:47:31.658665163 +0000 UTC m=+1007.474864763" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.183050 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.183117 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.183288 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:38.102.83.147:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f9h555h56h676h66fhb9h57h586hdbh584h56h65bh5d8h657h74hc5h584h686h66dh77h56h56fh76h596h68hd9h587h76h74h5b7h68h6cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rh4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(ba834afe-088c-4b0c-97f5-7986f8f9c988): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:36 crc kubenswrapper[4724]: I0223 17:47:36.644498 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.833651 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.833754 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.833903 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dch5fh597h674h685h59chd5h5dch66h5ffh5f8h89h56bh88h586h5d5hf6h59ch545h96h8fh548h55bh66dh56bh78h59dh696hch5fbh55fh5ccq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdbmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-lzxrb_openstack(4f6f0027-7e55-407c-be1d-5dc5f57250a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.835109 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/ovn-controller-ovs-lzxrb" podUID="4f6f0027-7e55-407c-be1d-5dc5f57250a8" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.886858 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.886940 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.887121 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.147:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8cck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(c7ad5fb5-517e-4249-9da4-08d99599caf0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:36 crc kubenswrapper[4724]: E0223 17:47:36.892239 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="c7ad5fb5-517e-4249-9da4-08d99599caf0" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.092069 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.092143 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.092349 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5ddh5f4h654h58ch5b8h586h546h589h54fh65ch5d5h5b5h646h5c6h554h75h56ch55hc8h55hb4h66ch94h9dh8fh5bh5dchfh656h5fhfbhf4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(eadce7d0-a9bc-4840-919b-a341aba11ca2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.093787 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="eadce7d0-a9bc-4840-919b-a341aba11ca2" Feb 23 17:47:37 crc kubenswrapper[4724]: I0223 17:47:37.261675 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:47:37 crc kubenswrapper[4724]: I0223 17:47:37.324190 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:37 crc kubenswrapper[4724]: I0223 17:47:37.324420 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="dnsmasq-dns" containerID="cri-o://000dbea1d0c2f638aac70927ceb2336b8f1a2a73fac16ba921beb769c5dfcb2c" gracePeriod=10 Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.679340 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.679964 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.680094 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.147:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dch5fh597h674h685h59chd5h5dch66h5ffh5f8h89h56bh88h586h5d5hf6h59ch545h96h8fh548h55bh66dh56bh78h59dh696hch5fbh55fh5ccq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq4w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-hh76w_openstack(8fd48d48-59c7-4470-9223-c3b3f786c8d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.681369 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-hh76w" podUID="8fd48d48-59c7-4470-9223-c3b3f786c8d9" Feb 23 17:47:37 crc kubenswrapper[4724]: I0223 17:47:37.698947 4724 generic.go:334] "Generic (PLEG): container finished" podID="272b0850-c495-47ac-a514-1483b621a887" containerID="000dbea1d0c2f638aac70927ceb2336b8f1a2a73fac16ba921beb769c5dfcb2c" exitCode=0 Feb 23 17:47:37 crc kubenswrapper[4724]: I0223 17:47:37.699551 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" event={"ID":"272b0850-c495-47ac-a514-1483b621a887","Type":"ContainerDied","Data":"000dbea1d0c2f638aac70927ceb2336b8f1a2a73fac16ba921beb769c5dfcb2c"} Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.700145 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-hh76w" podUID="8fd48d48-59c7-4470-9223-c3b3f786c8d9" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.701749 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="eadce7d0-a9bc-4840-919b-a341aba11ca2" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.701873 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-lzxrb" podUID="4f6f0027-7e55-407c-be1d-5dc5f57250a8" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.953204 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.953269 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Feb 23 17:47:37 crc kubenswrapper[4724]: E0223 17:47:37.953459 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:38.102.83.147:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65bh668hd8h5c5h97h5fh87h5d8h585h7fhb7h5c7h558h548h5bfh679h675h77h66hfdh59ch65dh557h659h668h559h548h679h76h5cfh677h5f4q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhm6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(02d0b5c7-a3f7-47d6-a52f-cff5a0946cea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.519239 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.637826 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc\") pod \"272b0850-c495-47ac-a514-1483b621a887\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.637976 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config\") pod \"272b0850-c495-47ac-a514-1483b621a887\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.638006 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbdmb\" (UniqueName: \"kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb\") pod \"272b0850-c495-47ac-a514-1483b621a887\" (UID: \"272b0850-c495-47ac-a514-1483b621a887\") " Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.642203 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb" (OuterVolumeSpecName: "kube-api-access-kbdmb") pod "272b0850-c495-47ac-a514-1483b621a887" (UID: "272b0850-c495-47ac-a514-1483b621a887"). InnerVolumeSpecName "kube-api-access-kbdmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.671195 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config" (OuterVolumeSpecName: "config") pod "272b0850-c495-47ac-a514-1483b621a887" (UID: "272b0850-c495-47ac-a514-1483b621a887"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.680535 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "272b0850-c495-47ac-a514-1483b621a887" (UID: "272b0850-c495-47ac-a514-1483b621a887"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.707233 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" event={"ID":"272b0850-c495-47ac-a514-1483b621a887","Type":"ContainerDied","Data":"cfef8bc30c7893fbd53d94259e16e85b87708855417396667dc03e3cfad0b182"} Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.707289 4724 scope.go:117] "RemoveContainer" containerID="000dbea1d0c2f638aac70927ceb2336b8f1a2a73fac16ba921beb769c5dfcb2c" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.707309 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc74595bc-c6dvf" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.739955 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.739989 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbdmb\" (UniqueName: \"kubernetes.io/projected/272b0850-c495-47ac-a514-1483b621a887-kube-api-access-kbdmb\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.740001 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272b0850-c495-47ac-a514-1483b621a887-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.740576 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.748411 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc74595bc-c6dvf"] Feb 23 17:47:38 crc kubenswrapper[4724]: I0223 17:47:38.972227 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272b0850-c495-47ac-a514-1483b621a887" path="/var/lib/kubelet/pods/272b0850-c495-47ac-a514-1483b621a887/volumes" Feb 23 17:47:39 crc kubenswrapper[4724]: I0223 17:47:39.859680 4724 scope.go:117] "RemoveContainer" containerID="e6161b84f732b56c3ba840bb98ab14819b2577f1fea6af7cad95773d61607009" Feb 23 17:47:41 crc kubenswrapper[4724]: E0223 17:47:41.596701 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="ba834afe-088c-4b0c-97f5-7986f8f9c988" Feb 23 17:47:41 crc kubenswrapper[4724]: E0223 17:47:41.633314 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="02d0b5c7-a3f7-47d6-a52f-cff5a0946cea" Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.732579 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52","Type":"ContainerStarted","Data":"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a"} Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.732941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.734471 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea","Type":"ContainerStarted","Data":"593fe1b7ff88667b0bcb5285ccda4df44d8771cf9d043b510223e01601d639c7"} Feb 23 17:47:41 crc kubenswrapper[4724]: E0223 17:47:41.735837 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="02d0b5c7-a3f7-47d6-a52f-cff5a0946cea" Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.736331 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c7ad5fb5-517e-4249-9da4-08d99599caf0","Type":"ContainerStarted","Data":"0ce9e7c4ee1753d23876eb05e29a3b567634832279ec1dfea20be92fb21b26ce"} Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.739245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ba834afe-088c-4b0c-97f5-7986f8f9c988","Type":"ContainerStarted","Data":"0825ac6ce98cdf6e72640c66502a834ed5c38ea9d3df4a97df37e5806a070f89"} Feb 23 17:47:41 crc kubenswrapper[4724]: E0223 17:47:41.740580 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ba834afe-088c-4b0c-97f5-7986f8f9c988" Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.740628 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e48a20ad-1863-458a-ba27-6b24cee6df0c","Type":"ContainerStarted","Data":"c9731289d0e3b7a4dcc37f6cb577d670e1f82683e55aa15a75d85db6f76aefa3"} Feb 23 17:47:41 crc kubenswrapper[4724]: I0223 17:47:41.756288 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.77639996 podStartE2EDuration="28.756267053s" podCreationTimestamp="2026-02-23 17:47:13 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.785802473 +0000 UTC m=+999.602002073" lastFinishedPulling="2026-02-23 17:47:40.765669576 +0000 UTC m=+1016.581869166" observedRunningTime="2026-02-23 17:47:41.746516012 +0000 UTC m=+1017.562715622" watchObservedRunningTime="2026-02-23 17:47:41.756267053 +0000 UTC m=+1017.572466663" Feb 23 17:47:42 crc kubenswrapper[4724]: I0223 17:47:42.750830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerStarted","Data":"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5"} Feb 23 17:47:42 crc kubenswrapper[4724]: I0223 17:47:42.752297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerStarted","Data":"06bd6ecb286b49b9c2e55b06a2075b277273fffc283ff6e9c4e46883dc206c68"} Feb 23 17:47:42 crc kubenswrapper[4724]: I0223 17:47:42.753843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6e165de7-7e1a-47c3-84d2-9fc675a2224a","Type":"ContainerStarted","Data":"c9d18c20c6962db8499f2253d01a6c3230882bdfa279614ae35a397ad51ddb04"} Feb 23 17:47:42 crc kubenswrapper[4724]: E0223 17:47:42.756001 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ba834afe-088c-4b0c-97f5-7986f8f9c988" Feb 23 17:47:42 crc kubenswrapper[4724]: E0223 17:47:42.756622 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" 
podUID="02d0b5c7-a3f7-47d6-a52f-cff5a0946cea" Feb 23 17:47:43 crc kubenswrapper[4724]: I0223 17:47:43.763417 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerStarted","Data":"49f0c72fb4911ec8aa2fc7339f8af96f7a36570471b463b8b8c3bf494fe72670"} Feb 23 17:47:48 crc kubenswrapper[4724]: I0223 17:47:48.800197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eadce7d0-a9bc-4840-919b-a341aba11ca2","Type":"ContainerStarted","Data":"e61d0d1424c5699e3b1b05ec0a0a982faebf85e5235bf1da8f62a779a65e5d74"} Feb 23 17:47:48 crc kubenswrapper[4724]: I0223 17:47:48.800773 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 17:47:48 crc kubenswrapper[4724]: I0223 17:47:48.803572 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f6f0027-7e55-407c-be1d-5dc5f57250a8" containerID="11426a32e9c92e50770d473fa3bf508c4aad1ae2dc9483cb875b1e96d46caed4" exitCode=0 Feb 23 17:47:48 crc kubenswrapper[4724]: I0223 17:47:48.803633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lzxrb" event={"ID":"4f6f0027-7e55-407c-be1d-5dc5f57250a8","Type":"ContainerDied","Data":"11426a32e9c92e50770d473fa3bf508c4aad1ae2dc9483cb875b1e96d46caed4"} Feb 23 17:47:48 crc kubenswrapper[4724]: I0223 17:47:48.820465 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.271292174 podStartE2EDuration="37.820447086s" podCreationTimestamp="2026-02-23 17:47:11 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.489523866 +0000 UTC m=+999.305723466" lastFinishedPulling="2026-02-23 17:47:48.038678778 +0000 UTC m=+1023.854878378" observedRunningTime="2026-02-23 17:47:48.819112182 +0000 UTC m=+1024.635311772" watchObservedRunningTime="2026-02-23 17:47:48.820447086 +0000 UTC m=+1024.636646686" Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.815023 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lzxrb" event={"ID":"4f6f0027-7e55-407c-be1d-5dc5f57250a8","Type":"ContainerStarted","Data":"200eb1d9807c98c61a870fc63f664d862e2458b7d08e13cc98ff56af696e92db"} Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.816631 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lzxrb" event={"ID":"4f6f0027-7e55-407c-be1d-5dc5f57250a8","Type":"ContainerStarted","Data":"cea28a82fe64d3e5003a6e3bc2b29c413fd9a467dc6a7bcfea1aacd89ed6e227"} Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.816909 4724 generic.go:334] "Generic (PLEG): container finished" podID="c7ad5fb5-517e-4249-9da4-08d99599caf0" containerID="0ce9e7c4ee1753d23876eb05e29a3b567634832279ec1dfea20be92fb21b26ce" exitCode=0 Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.817067 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.817098 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c7ad5fb5-517e-4249-9da4-08d99599caf0","Type":"ContainerDied","Data":"0ce9e7c4ee1753d23876eb05e29a3b567634832279ec1dfea20be92fb21b26ce"} Feb 23 17:47:49 crc kubenswrapper[4724]: I0223 17:47:49.817489 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:47:49 crc 
kubenswrapper[4724]: I0223 17:47:49.846651 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lzxrb" podStartSLOduration=9.768765124 podStartE2EDuration="33.846627618s" podCreationTimestamp="2026-02-23 17:47:16 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.95991779 +0000 UTC m=+999.776117390" lastFinishedPulling="2026-02-23 17:47:48.037780284 +0000 UTC m=+1023.853979884" observedRunningTime="2026-02-23 17:47:49.844096163 +0000 UTC m=+1025.660295773" watchObservedRunningTime="2026-02-23 17:47:49.846627618 +0000 UTC m=+1025.662827218" Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.837714 4724 generic.go:334] "Generic (PLEG): container finished" podID="e48a20ad-1863-458a-ba27-6b24cee6df0c" containerID="c9731289d0e3b7a4dcc37f6cb577d670e1f82683e55aa15a75d85db6f76aefa3" exitCode=0 Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.837773 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e48a20ad-1863-458a-ba27-6b24cee6df0c","Type":"ContainerDied","Data":"c9731289d0e3b7a4dcc37f6cb577d670e1f82683e55aa15a75d85db6f76aefa3"} Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.841566 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerID="49f0c72fb4911ec8aa2fc7339f8af96f7a36570471b463b8b8c3bf494fe72670" exitCode=0 Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.841632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerDied","Data":"49f0c72fb4911ec8aa2fc7339f8af96f7a36570471b463b8b8c3bf494fe72670"} Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.846747 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c7ad5fb5-517e-4249-9da4-08d99599caf0","Type":"ContainerStarted","Data":"c43ecbe8c79e540c49d6f55e1431d2272b0bf6bf49fa3352d351ad825e9daba9"} Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.931556 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371994.923248 podStartE2EDuration="41.931526609s" podCreationTimestamp="2026-02-23 17:47:09 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.69466364 +0000 UTC m=+999.510863240" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:47:50.926846409 +0000 UTC m=+1026.743046019" watchObservedRunningTime="2026-02-23 17:47:50.931526609 +0000 UTC m=+1026.747726209" Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.967174 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:50 crc kubenswrapper[4724]: I0223 17:47:50.967237 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:51 crc kubenswrapper[4724]: I0223 17:47:51.859755 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e48a20ad-1863-458a-ba27-6b24cee6df0c","Type":"ContainerStarted","Data":"eb85dd1e4294ee378b257861dc7553a703c3937741aefcab04c8b9accd5199b4"} Feb 23 17:47:51 crc kubenswrapper[4724]: I0223 17:47:51.886612 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.931693330999998 podStartE2EDuration="43.886593674s" 
podCreationTimestamp="2026-02-23 17:47:08 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.466586226 +0000 UTC m=+999.282785816" lastFinishedPulling="2026-02-23 17:47:38.421486569 +0000 UTC m=+1014.237686159" observedRunningTime="2026-02-23 17:47:51.883927345 +0000 UTC m=+1027.700126945" watchObservedRunningTime="2026-02-23 17:47:51.886593674 +0000 UTC m=+1027.702793274" Feb 23 17:47:52 crc kubenswrapper[4724]: I0223 17:47:52.866498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w" event={"ID":"8fd48d48-59c7-4470-9223-c3b3f786c8d9","Type":"ContainerStarted","Data":"dea21182bb82f9e9c102636bb34c469db3cf7febee145bda7787692ae6033811"} Feb 23 17:47:52 crc kubenswrapper[4724]: I0223 17:47:52.866923 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hh76w" Feb 23 17:47:52 crc kubenswrapper[4724]: I0223 17:47:52.889552 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hh76w" podStartSLOduration=8.621584151 podStartE2EDuration="36.889533159s" podCreationTimestamp="2026-02-23 17:47:16 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.75455299 +0000 UTC m=+999.570752590" lastFinishedPulling="2026-02-23 17:47:52.022501998 +0000 UTC m=+1027.838701598" observedRunningTime="2026-02-23 17:47:52.883464343 +0000 UTC m=+1028.699663943" watchObservedRunningTime="2026-02-23 17:47:52.889533159 +0000 UTC m=+1028.705732759" Feb 23 17:47:53 crc kubenswrapper[4724]: I0223 17:47:53.800333 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 17:47:55 crc kubenswrapper[4724]: I0223 17:47:55.078524 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:55 crc kubenswrapper[4724]: I0223 17:47:55.188334 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 17:47:56 crc kubenswrapper[4724]: I0223 17:47:56.384032 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 23 17:47:59 crc kubenswrapper[4724]: I0223 17:47:59.637523 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 23 17:47:59 crc kubenswrapper[4724]: I0223 17:47:59.637866 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.926012 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:03 crc kubenswrapper[4724]: E0223 17:48:03.926418 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="init" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.926433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="init" Feb 23 17:48:03 crc kubenswrapper[4724]: E0223 17:48:03.926471 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="dnsmasq-dns" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.926478 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="dnsmasq-dns" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.926645 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="272b0850-c495-47ac-a514-1483b621a887" containerName="dnsmasq-dns" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.927453 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.980554 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.980897 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhwv\" (UniqueName: \"kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.980941 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:03 crc kubenswrapper[4724]: I0223 17:48:03.985113 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.082101 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhwv\" (UniqueName: \"kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.082203 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.082282 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.083708 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.085306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 
17:48:04.108155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhwv\" (UniqueName: \"kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv\") pod \"dnsmasq-dns-6fb56974c-zxzzb\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: I0223 17:48:04.244589 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:04 crc kubenswrapper[4724]: E0223 17:48:04.728999 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 23 17:48:04 crc kubenswrapper[4724]: E0223 17:48:04.729558 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(ad58a78a-ccdb-4154-852e-8a8984a2a650): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.048320 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.058702 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.064290 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.066927 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.067731 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.067771 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.068892 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-77btq" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.078836 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200196 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200350 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-lock\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mvp\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-kube-api-access-f9mvp\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-cache\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200467 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.200564 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3946025b-c492-4f1b-a3c3-62d2fa658586-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302612 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3946025b-c492-4f1b-a3c3-62d2fa658586-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302736 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-lock\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302802 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mvp\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-kube-api-access-f9mvp\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302820 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-cache\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.302849 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.303371 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.303623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-lock\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.303797 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.303848 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.303919 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:05.80388624 +0000 UTC m=+1041.620085840 (durationBeforeRetry 500ms). 
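
The "etc-swift" volume failing here is a projected volume, and the projected.go errors above show why: one of its sources is the "swift-ring-files" ConfigMap, which does not exist yet. A minimal sketch of such a volume with the Kubernetes Go API, assuming the projection named in the errors is the relevant source (the real pod spec may project additional items):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	etcSwift := corev1.Volume{
    		Name: "etc-swift",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "swift-ring-files",
    						},
    						// Optional is left nil (= required), so a missing
    						// ConfigMap is a hard SetUp error: exactly the
    						// `configmap "swift-ring-files" not found` above.
    					},
    				}},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", etcSwift)
    }

The swift-ring-rebalance job that appears at 17:48:09 below is presumably what generates and publishes the ring files; until it does, every SetUp attempt for swift-storage-0 fails and is re-queued.
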
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.304509 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3946025b-c492-4f1b-a3c3-62d2fa658586-cache\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.324353 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3946025b-c492-4f1b-a3c3-62d2fa658586-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.326113 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.332162 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mvp\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-kube-api-access-f9mvp\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.810344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.810886 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.810903 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: E0223 17:48:05.810946 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:06.810931927 +0000 UTC m=+1042.627131527 (durationBeforeRetry 1s). 
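
Note the durationBeforeRetry progression across these repeated failures: 500ms above, then 1s, 2s and 4s in the retries below — the kubelet backs off exponentially per volume operation. A stand-alone sketch of that doubling (the cap is an assumption; the log never shows the kubelet's actual maximum):

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetryDelay doubles the wait per consecutive failure, starting at
    // 500ms. Illustrative only; the real logic lives in the kubelet's
    // nestedpendingoperations exponential-backoff handling.
    func nextRetryDelay(consecutiveFailures int, max time.Duration) time.Duration {
    	d := 500 * time.Millisecond
    	for i := 1; i < consecutiveFailures; i++ {
    		d *= 2
    		if d >= max {
    			return max
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 4; n++ {
    		fmt.Printf("failure %d -> retry in %v\n", n, nextRetryDelay(n, 2*time.Minute))
    	}
    	// failure 1 -> retry in 500ms ... failure 4 -> retry in 4s,
    	// matching the durationBeforeRetry values in this log.
    }
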
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.971243 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02d0b5c7-a3f7-47d6-a52f-cff5a0946cea","Type":"ContainerStarted","Data":"1fb971e0c01cdbec9771694e91cbc3650a946a3acd1f50d65422de8854f29529"} Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.977438 4724 generic.go:334] "Generic (PLEG): container finished" podID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerID="a062ad1ec735cba3a0d126f8127dfc3f5a37e81d32df1de26f4c504e2d5e33fb" exitCode=0 Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.977486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" event={"ID":"ff748f9f-f0de-4ca4-ab7a-487cc4f74311","Type":"ContainerDied","Data":"a062ad1ec735cba3a0d126f8127dfc3f5a37e81d32df1de26f4c504e2d5e33fb"} Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.977813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" event={"ID":"ff748f9f-f0de-4ca4-ab7a-487cc4f74311","Type":"ContainerStarted","Data":"bd2967a38ed75b8fbb2de3e1723caa9a322dbbf05b51cf9263b2fc23fcf71633"} Feb 23 17:48:05 crc kubenswrapper[4724]: I0223 17:48:05.980550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ba834afe-088c-4b0c-97f5-7986f8f9c988","Type":"ContainerStarted","Data":"ddd08092409de12beb790376bfb9e48ae71331ff823a1140789c5ac716c374ad"} Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.031765 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.558036809 podStartE2EDuration="47.031744364s" podCreationTimestamp="2026-02-23 17:47:19 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.861617122 +0000 UTC m=+999.677816722" lastFinishedPulling="2026-02-23 17:48:05.335324677 +0000 UTC m=+1041.151524277" observedRunningTime="2026-02-23 17:48:06.011119523 +0000 UTC m=+1041.827319123" watchObservedRunningTime="2026-02-23 17:48:06.031744364 +0000 UTC m=+1041.847943964" Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.063078 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.980042462 podStartE2EDuration="50.063059179s" podCreationTimestamp="2026-02-23 17:47:16 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.252519173 +0000 UTC m=+999.068718763" lastFinishedPulling="2026-02-23 17:48:05.33553588 +0000 UTC m=+1041.151735480" observedRunningTime="2026-02-23 17:48:06.055072866 +0000 UTC m=+1041.871272466" watchObservedRunningTime="2026-02-23 17:48:06.063059179 +0000 UTC m=+1041.879258769" Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.701520 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.824688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:06 crc kubenswrapper[4724]: E0223 17:48:06.824957 
4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:06 crc kubenswrapper[4724]: E0223 17:48:06.825004 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:06 crc kubenswrapper[4724]: E0223 17:48:06.825083 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:08.825060391 +0000 UTC m=+1044.641260011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.895414 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e48a20ad-1863-458a-ba27-6b24cee6df0c" containerName="galera" probeResult="failure" output=< Feb 23 17:48:06 crc kubenswrapper[4724]: wsrep_local_state_comment (Joined) differs from Synced Feb 23 17:48:06 crc kubenswrapper[4724]: > Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.988870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" event={"ID":"ff748f9f-f0de-4ca4-ab7a-487cc4f74311","Type":"ContainerStarted","Data":"86be1ba7bb8290be58f1926afecd1a9bff3a568630f28ce0200b31792c0e74eb"} Feb 23 17:48:06 crc kubenswrapper[4724]: I0223 17:48:06.989020 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:07 crc kubenswrapper[4724]: I0223 17:48:07.007996 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" podStartSLOduration=4.007980276 podStartE2EDuration="4.007980276s" podCreationTimestamp="2026-02-23 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:07.002064997 +0000 UTC m=+1042.818264597" watchObservedRunningTime="2026-02-23 17:48:07.007980276 +0000 UTC m=+1042.824179876" Feb 23 17:48:07 crc kubenswrapper[4724]: I0223 17:48:07.740554 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 23 17:48:07 crc kubenswrapper[4724]: I0223 17:48:07.999165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerStarted","Data":"f5bf59b649fa98c30ad816168fb029be1cc7c10b7b9f0e5f43d7540ba180fb00"} Feb 23 17:48:08 crc kubenswrapper[4724]: I0223 17:48:08.596060 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 23 17:48:08 crc kubenswrapper[4724]: I0223 17:48:08.640981 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 23 17:48:08 crc kubenswrapper[4724]: I0223 17:48:08.740793 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 23 17:48:08 crc kubenswrapper[4724]: I0223 17:48:08.798142 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/ovsdbserver-nb-0" Feb 23 17:48:08 crc kubenswrapper[4724]: I0223 17:48:08.870580 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:08 crc kubenswrapper[4724]: E0223 17:48:08.872001 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:08 crc kubenswrapper[4724]: E0223 17:48:08.872036 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:08 crc kubenswrapper[4724]: E0223 17:48:08.872096 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:12.872072277 +0000 UTC m=+1048.688271887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.050324 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.069700 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9nzl2"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.071094 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.079609 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.079855 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.081662 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.086559 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9nzl2"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.106774 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9nzl2"] Feb 23 17:48:09 crc kubenswrapper[4724]: E0223 17:48:09.146490 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-scqws ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-scqws ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-9nzl2" podUID="21b49929-1a79-4138-aae4-4b5ec923bd3f" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.195643 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.195779 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.195878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqws\" (UniqueName: \"kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.196002 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.196036 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.196059 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.196108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.203143 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w2vrd"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.204499 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.213020 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2vrd"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.297867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.297952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298002 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298057 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298120 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc 
kubenswrapper[4724]: I0223 17:48:09.298144 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzmw\" (UniqueName: \"kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298259 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scqws\" (UniqueName: \"kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298282 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298446 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298522 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298560 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.298958 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 
17:48:09.299234 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.299413 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.304071 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.305282 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.310553 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.321061 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scqws\" (UniqueName: \"kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws\") pod \"swift-ring-rebalance-9nzl2\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.400802 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.400882 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.400903 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.400922 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzmw\" (UniqueName: 
\"kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.400955 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.401012 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.401104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.401906 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.401978 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.402552 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.404446 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.405448 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.419468 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " 
pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.422694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzmw\" (UniqueName: \"kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw\") pod \"swift-ring-rebalance-w2vrd\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.553513 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.752450 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j8wwt"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.753870 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.765352 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j8wwt"] Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.773133 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.786903 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.910280 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:09 crc kubenswrapper[4724]: I0223 17:48:09.910512 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4n2f\" (UniqueName: \"kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.012203 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.012345 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4n2f\" (UniqueName: \"kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.014172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 
17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.031447 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4n2f\" (UniqueName: \"kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f\") pod \"root-account-create-update-j8wwt\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.057121 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.077178 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.083242 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.102528 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218122 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218245 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218289 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218310 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218350 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scqws\" (UniqueName: \"kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218366 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: \"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.218409 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts\") pod \"21b49929-1a79-4138-aae4-4b5ec923bd3f\" (UID: 
\"21b49929-1a79-4138-aae4-4b5ec923bd3f\") " Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.219116 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.219147 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.219225 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts" (OuterVolumeSpecName: "scripts") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.222565 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.222688 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws" (OuterVolumeSpecName: "kube-api-access-scqws") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "kube-api-access-scqws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.222934 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.223060 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "21b49929-1a79-4138-aae4-4b5ec923bd3f" (UID: "21b49929-1a79-4138-aae4-4b5ec923bd3f"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320033 4724 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/21b49929-1a79-4138-aae4-4b5ec923bd3f-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320070 4724 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320081 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320092 4724 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320105 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scqws\" (UniqueName: \"kubernetes.io/projected/21b49929-1a79-4138-aae4-4b5ec923bd3f-kube-api-access-scqws\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320115 4724 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/21b49929-1a79-4138-aae4-4b5ec923bd3f-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.320125 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21b49929-1a79-4138-aae4-4b5ec923bd3f-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.356904 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.357121 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="dnsmasq-dns" containerID="cri-o://86be1ba7bb8290be58f1926afecd1a9bff3a568630f28ce0200b31792c0e74eb" gracePeriod=10 Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.391234 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.392928 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.394960 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.403664 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.417676 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8b9ks"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.439198 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.447879 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.456083 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8b9ks"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523526 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgcr\" (UniqueName: \"kubernetes.io/projected/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-kube-api-access-tkgcr\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523573 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-combined-ca-bundle\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523595 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523630 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovn-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523665 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523689 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovs-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-config\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523727 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fdlw\" (UniqueName: \"kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw\") pod 
\"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523764 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.523802 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.624847 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovn-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.624909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.624938 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovs-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.624957 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-config\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.624980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fdlw\" (UniqueName: \"kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.625016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.625055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.625091 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgcr\" (UniqueName: \"kubernetes.io/projected/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-kube-api-access-tkgcr\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.625112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.625126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-combined-ca-bundle\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.626232 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovn-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.632105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-ovs-rundir\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.635128 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-config\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.637697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.640188 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.641301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-combined-ca-bundle\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 
17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.641539 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.650321 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.662552 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fdlw\" (UniqueName: \"kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw\") pod \"dnsmasq-dns-65f4f97889-g2nhz\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.668736 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgcr\" (UniqueName: \"kubernetes.io/projected/0371ce0f-1e0f-4b9f-a5aa-971ae7d19279-kube-api-access-tkgcr\") pod \"ovn-controller-metrics-8b9ks\" (UID: \"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279\") " pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.729407 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.730452 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.766676 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8b9ks" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.766914 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.768915 4724 util.go:30] "No sandbox for pod can be found. 
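
Three dnsmasq-dns pod-template hashes appear within seconds here (6fb56974c, 65f4f97889, b99fb9575): each time a new backend ConfigMap is published (ovsdbserver-sb at 17:48:10.394, ovsdbserver-nb at 17:48:10.776), it is evidently wired into the Deployment's pod template as a volume, which rolls out a new ReplicaSet and deletes the previous pod — sometimes, as with 65f4f97889-g2nhz above, before it ever starts. A minimal client-go sketch, assuming in-cluster credentials, of watching the "openstack" namespace for the ConfigMap churn that drives this:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	w, err := client.CoreV1().ConfigMaps("openstack").Watch(
    		context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for ev := range w.ResultChan() {
    		// ADDED events for dns-svc, ovsdbserver-sb, ovsdbserver-nb are
    		// what precede each dnsmasq-dns re-roll in this log.
    		fmt.Println(ev.Type)
    	}
    }
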
Need to start a new one" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.776026 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.778752 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.828509 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvtk\" (UniqueName: \"kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.828620 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.828684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.828780 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.828819 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.929734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.929938 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.929965 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 
crc kubenswrapper[4724]: I0223 17:48:10.930019 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvtk\" (UniqueName: \"kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.930059 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.930877 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.932205 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.932661 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.932741 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:10 crc kubenswrapper[4724]: I0223 17:48:10.954329 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvtk\" (UniqueName: \"kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk\") pod \"dnsmasq-dns-b99fb9575-gk5sx\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.065891 4724 generic.go:334] "Generic (PLEG): container finished" podID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerID="86be1ba7bb8290be58f1926afecd1a9bff3a568630f28ce0200b31792c0e74eb" exitCode=0 Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.065996 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nzl2" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.066828 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" event={"ID":"ff748f9f-f0de-4ca4-ab7a-487cc4f74311","Type":"ContainerDied","Data":"86be1ba7bb8290be58f1926afecd1a9bff3a568630f28ce0200b31792c0e74eb"} Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.112901 4724 util.go:30] "No sandbox for pod can be found. 
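The util.go:30 message "No sandbox for pod can be found. Need to start a new one" is logged while computing pod actions for a pod with no existing pause sandbox (a freshly scheduled pod), and the util.go:48 variant "No ready sandbox for pod can be found" covers a sandbox that exists but is no longer ready and must be recreated. A hedged sketch of just that decision; the real logic in the kubelet's kuberuntime package is more involved (network namespace changes, attempt counts, and so on):

```go
package main

import "fmt"

// sandbox is an illustrative summary of what the kubelet knows about a
// pod's pause sandbox when it computes sync actions.
type sandbox struct{ exists, ready bool }

// needNewSandbox mirrors the two log messages above (simplified).
func needNewSandbox(s sandbox) (bool, string) {
	switch {
	case !s.exists:
		return true, "No sandbox for pod can be found. Need to start a new one"
	case !s.ready:
		return true, "No ready sandbox for pod can be found. Need to start a new one"
	default:
		return false, ""
	}
}

func main() {
	for _, s := range []sandbox{{false, false}, {true, false}, {true, true}} {
		create, msg := needNewSandbox(s)
		fmt.Println(create, msg)
	}
}
```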
Need to start a new one" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.131149 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9nzl2"] Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.140661 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-9nzl2"] Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.142583 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.341932 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbhwv\" (UniqueName: \"kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv\") pod \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.342069 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc\") pod \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.342101 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config\") pod \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\" (UID: \"ff748f9f-f0de-4ca4-ab7a-487cc4f74311\") " Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.346872 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv" (OuterVolumeSpecName: "kube-api-access-qbhwv") pod "ff748f9f-f0de-4ca4-ab7a-487cc4f74311" (UID: "ff748f9f-f0de-4ca4-ab7a-487cc4f74311"). InnerVolumeSpecName "kube-api-access-qbhwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.438178 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config" (OuterVolumeSpecName: "config") pod "ff748f9f-f0de-4ca4-ab7a-487cc4f74311" (UID: "ff748f9f-f0de-4ca4-ab7a-487cc4f74311"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.447852 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbhwv\" (UniqueName: \"kubernetes.io/projected/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-kube-api-access-qbhwv\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.447886 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.451356 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff748f9f-f0de-4ca4-ab7a-487cc4f74311" (UID: "ff748f9f-f0de-4ca4-ab7a-487cc4f74311"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.550989 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff748f9f-f0de-4ca4-ab7a-487cc4f74311-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:11 crc kubenswrapper[4724]: E0223 17:48:11.581172 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.758202 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.773924 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2vrd"] Feb 23 17:48:11 crc kubenswrapper[4724]: I0223 17:48:11.968257 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8b9ks"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.014139 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j8wwt"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.040030 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.078274 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerStarted","Data":"de5be2342bd4a6b35d3550920aef4a03893172b11fad0fb7fd67afea2e3564d8"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.080178 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8wwt" event={"ID":"a67fc661-9de0-49e6-80d6-a87cb0e17e76","Type":"ContainerStarted","Data":"991741670e82559effa0d47f96ebfa0613f99be795f5282d31b2d49562778999"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.085920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8b9ks" event={"ID":"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279","Type":"ContainerStarted","Data":"a2ebd8021beca4ead223517a05de0bfee856d37d076ba6db068c53bd2e0e9e77"} Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.086517 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.087159 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2vrd" event={"ID":"bc3d191e-4725-42ef-90af-16b57d7bf649","Type":"ContainerStarted","Data":"9734e487247cedd89987b4aeb597bb7a7ca4c0e0d71c7cfcaea08988e8262a35"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.095351 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" 
event={"ID":"ff748f9f-f0de-4ca4-ab7a-487cc4f74311","Type":"ContainerDied","Data":"bd2967a38ed75b8fbb2de3e1723caa9a322dbbf05b51cf9263b2fc23fcf71633"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.095462 4724 scope.go:117] "RemoveContainer" containerID="86be1ba7bb8290be58f1926afecd1a9bff3a568630f28ce0200b31792c0e74eb" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.095657 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb56974c-zxzzb" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.096858 4724 generic.go:334] "Generic (PLEG): container finished" podID="c89ffbc7-ee0a-42d2-b7b5-f6faad823620" containerID="2d6b6cfd4006609a87b0a94ecb5bc05e2d66cb603e6175d5e8370ab1e9dbc3e1" exitCode=0 Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.096887 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" event={"ID":"c89ffbc7-ee0a-42d2-b7b5-f6faad823620","Type":"ContainerDied","Data":"2d6b6cfd4006609a87b0a94ecb5bc05e2d66cb603e6175d5e8370ab1e9dbc3e1"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.096978 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" event={"ID":"c89ffbc7-ee0a-42d2-b7b5-f6faad823620","Type":"ContainerStarted","Data":"cb9acd01cd327438fca05c94efef0f7cb137d9cc1fb30cd360109f1054a30048"} Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.138756 4724 scope.go:117] "RemoveContainer" containerID="a062ad1ec735cba3a0d126f8127dfc3f5a37e81d32df1de26f4c504e2d5e33fb" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.163729 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.184852 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fb56974c-zxzzb"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.433571 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-x87lg"] Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.434326 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="init" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.434351 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="init" Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.434434 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="dnsmasq-dns" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.434444 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="dnsmasq-dns" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.434650 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" containerName="dnsmasq-dns" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.435322 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.448735 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x87lg"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.540861 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.553465 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e6b-account-create-update-2q4q5"] Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.553848 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89ffbc7-ee0a-42d2-b7b5-f6faad823620" containerName="init" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.553861 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89ffbc7-ee0a-42d2-b7b5-f6faad823620" containerName="init" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.554067 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89ffbc7-ee0a-42d2-b7b5-f6faad823620" containerName="init" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.557231 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.562677 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e6b-account-create-update-2q4q5"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.579892 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.582565 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config\") pod \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.582715 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzflg\" (UniqueName: \"kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.582813 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.583073 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.583192 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfg2t\" (UniqueName: \"kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.609297 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config" (OuterVolumeSpecName: "config") pod "c89ffbc7-ee0a-42d2-b7b5-f6faad823620" (UID: "c89ffbc7-ee0a-42d2-b7b5-f6faad823620"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.646153 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-fq65r"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.647461 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.653532 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fq65r"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.683804 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fdlw\" (UniqueName: \"kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw\") pod \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.683844 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc\") pod \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.685991 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb\") pod \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\" (UID: \"c89ffbc7-ee0a-42d2-b7b5-f6faad823620\") " Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686257 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfg2t\" (UniqueName: \"kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wb9v\" (UniqueName: \"kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686354 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzflg\" (UniqueName: \"kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686386 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 
17:48:12.686448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686531 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.686572 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.687336 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.699490 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw" (OuterVolumeSpecName: "kube-api-access-9fdlw") pod "c89ffbc7-ee0a-42d2-b7b5-f6faad823620" (UID: "c89ffbc7-ee0a-42d2-b7b5-f6faad823620"). InnerVolumeSpecName "kube-api-access-9fdlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.702010 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.705095 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c89ffbc7-ee0a-42d2-b7b5-f6faad823620" (UID: "c89ffbc7-ee0a-42d2-b7b5-f6faad823620"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.705226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c89ffbc7-ee0a-42d2-b7b5-f6faad823620" (UID: "c89ffbc7-ee0a-42d2-b7b5-f6faad823620"). InnerVolumeSpecName "dns-svc". 
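Deletion runs the mount flow in reverse, and the three steps are all visible here: "UnmountVolume started" (reconciler_common.go:159), then "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached ... DevicePath \"\"" (reconciler_common.go:293). The OuterVolumeSpecName is the name the pod spec used ("config", "dns-svc"); the InnerVolumeSpecName is the plugin-level name, which for configmap volumes is the same; and DevicePath stays empty because nothing block-level was attached. Reduced to a sketch with illustrative types:

```go
package main

import "fmt"

// mountedVolume carries the two spec names and plugin seen in the log.
type mountedVolume struct {
	outer, inner, plugin string
}

// unmount walks the three-step teardown in order.
func unmount(v mountedVolume) {
	fmt.Printf("UnmountVolume started for volume %q\n", v.outer)
	// Plugin TearDown removes the projected files from the pod dir.
	fmt.Printf("UnmountVolume.TearDown succeeded (OuterVolumeSpecName: %q). InnerVolumeSpecName %q. PluginName %q\n",
		v.outer, v.inner, v.plugin)
	fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.outer, "")
}

func main() {
	unmount(mountedVolume{"dns-svc", "dns-svc", "kubernetes.io/configmap"})
}
```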
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.705319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfg2t\" (UniqueName: \"kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t\") pod \"keystone-1e6b-account-create-update-2q4q5\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.715169 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzflg\" (UniqueName: \"kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg\") pod \"keystone-db-create-x87lg\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.739784 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-483b-account-create-update-x5wds"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.741708 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.743743 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.755376 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-483b-account-create-update-x5wds"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.787650 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.787797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wb9v\" (UniqueName: \"kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.787841 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.787852 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fdlw\" (UniqueName: \"kubernetes.io/projected/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-kube-api-access-9fdlw\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.787863 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ffbc7-ee0a-42d2-b7b5-f6faad823620-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.788891 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 
17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.794459 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.819033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wb9v\" (UniqueName: \"kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v\") pod \"placement-db-create-fq65r\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.837054 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.890576 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.890653 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.891005 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.891025 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.891058 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2mb\" (UniqueName: \"kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:12 crc kubenswrapper[4724]: E0223 17:48:12.891067 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:20.891053214 +0000 UTC m=+1056.707252814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.907401 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.975609 4724 util.go:30] "No sandbox for pod can be found. 
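The etc-swift failure is an ordering gap, not a fault: swift-storage-0's projected volume wants configmap "swift-ring-files", which the swift-ring-rebalance job has not produced yet, so nestedpendingoperations schedules a retry ("No retries permitted until ... durationBeforeRetry 8s"). The delay grows exponentially on repeated failures; the pod has evidently already failed a few times to have reached 8s. A sketch of that doubling, where the 500ms starting point and the cap are assumptions for illustration rather than values taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial durationBeforeRetry
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed: configmap not found; retry in %v\n", attempt, delay)
		delay *= 2 // 500ms, 1s, 2s, 4s, 8s, ... (capped in practice)
	}
}
```

Once the rebalance job writes the ring files, a later retry of MountVolume.SetUp succeeds and the pod proceeds.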
Need to start a new one" pod="openstack/placement-db-create-fq65r" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.983544 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b49929-1a79-4138-aae4-4b5ec923bd3f" path="/var/lib/kubelet/pods/21b49929-1a79-4138-aae4-4b5ec923bd3f/volumes" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.984089 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff748f9f-f0de-4ca4-ab7a-487cc4f74311" path="/var/lib/kubelet/pods/ff748f9f-f0de-4ca4-ab7a-487cc4f74311/volumes" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.986036 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.987909 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.992977 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.992992 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.993183 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.993304 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-47ph9" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.993510 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2mb\" (UniqueName: \"kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.993583 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:12 crc kubenswrapper[4724]: I0223 17:48:12.995828 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.001381 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.018278 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2mb\" (UniqueName: \"kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb\") pod \"placement-483b-account-create-update-x5wds\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.094805 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.094848 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-scripts\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.095171 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.095792 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.095837 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfqw\" (UniqueName: \"kubernetes.io/projected/46836cc7-f4d3-432c-aa3e-c448d50a212e-kube-api-access-5gfqw\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.095880 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-config\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.095920 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.103090 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.106267 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8b9ks" event={"ID":"0371ce0f-1e0f-4b9f-a5aa-971ae7d19279","Type":"ContainerStarted","Data":"fd42ac8002e190e470c38beb699c9bb3df99d5f64b3ce557f8ec0ecae6886c73"} Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.108484 4724 generic.go:334] "Generic (PLEG): container finished" podID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerID="a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604" exitCode=0 Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.108547 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" event={"ID":"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135","Type":"ContainerDied","Data":"a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604"} Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.108568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" event={"ID":"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135","Type":"ContainerStarted","Data":"9babf06dd96ba6620253113c09963dd4cbc014eca9a86bf510d93b466dffbcdd"} Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.111432 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" event={"ID":"c89ffbc7-ee0a-42d2-b7b5-f6faad823620","Type":"ContainerDied","Data":"cb9acd01cd327438fca05c94efef0f7cb137d9cc1fb30cd360109f1054a30048"} Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.111494 4724 scope.go:117] "RemoveContainer" containerID="2d6b6cfd4006609a87b0a94ecb5bc05e2d66cb603e6175d5e8370ab1e9dbc3e1" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.112283 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f4f97889-g2nhz" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.114837 4724 generic.go:334] "Generic (PLEG): container finished" podID="a67fc661-9de0-49e6-80d6-a87cb0e17e76" containerID="283abc00c29ca6a35b5398caf6f4627399287c4f34211c62b13b6db1f3ccfba4" exitCode=0 Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.114868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8wwt" event={"ID":"a67fc661-9de0-49e6-80d6-a87cb0e17e76","Type":"ContainerDied","Data":"283abc00c29ca6a35b5398caf6f4627399287c4f34211c62b13b6db1f3ccfba4"} Feb 23 17:48:13 crc kubenswrapper[4724]: E0223 17:48:13.116093 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.134418 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8b9ks" podStartSLOduration=3.134400344 podStartE2EDuration="3.134400344s" podCreationTimestamp="2026-02-23 17:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:13.12976614 +0000 UTC m=+1048.945965750" watchObservedRunningTime="2026-02-23 17:48:13.134400344 +0000 UTC m=+1048.950599944" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.179612 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.191151 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f4f97889-g2nhz"] Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202188 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gfqw\" (UniqueName: \"kubernetes.io/projected/46836cc7-f4d3-432c-aa3e-c448d50a212e-kube-api-access-5gfqw\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202223 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-config\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202273 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202292 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-scripts\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.202555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.203639 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-config\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.204075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46836cc7-f4d3-432c-aa3e-c448d50a212e-scripts\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.204613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.206889 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.207055 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.221175 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/46836cc7-f4d3-432c-aa3e-c448d50a212e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.226350 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gfqw\" (UniqueName: \"kubernetes.io/projected/46836cc7-f4d3-432c-aa3e-c448d50a212e-kube-api-access-5gfqw\") pod \"ovn-northd-0\" (UID: \"46836cc7-f4d3-432c-aa3e-c448d50a212e\") " pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.327526 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.906716 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-hfvt8"] Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.908056 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:13 crc kubenswrapper[4724]: I0223 17:48:13.914862 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-hfvt8"] Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.019434 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.019555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2dgh\" (UniqueName: \"kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.043363 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-75e7-account-create-update-j7mwp"] Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.054961 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-75e7-account-create-update-j7mwp"] Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.055050 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.068681 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.121745 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2dgh\" (UniqueName: \"kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.121958 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.122834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.152134 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2dgh\" (UniqueName: \"kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh\") pod \"watcher-db-create-hfvt8\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.225336 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.225417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n67r\" (UniqueName: \"kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.234739 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.326840 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.326908 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n67r\" (UniqueName: \"kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.327778 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.356504 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n67r\" (UniqueName: \"kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r\") pod \"watcher-75e7-account-create-update-j7mwp\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.393563 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:14 crc kubenswrapper[4724]: I0223 17:48:14.966187 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89ffbc7-ee0a-42d2-b7b5-f6faad823620" path="/var/lib/kubelet/pods/c89ffbc7-ee0a-42d2-b7b5-f6faad823620/volumes" Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.167991 4724 generic.go:334] "Generic (PLEG): container finished" podID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerID="36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5" exitCode=0 Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.168087 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerDied","Data":"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5"} Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.176670 4724 generic.go:334] "Generic (PLEG): container finished" podID="dd0498b8-b963-4905-a986-13400917ef41" containerID="06bd6ecb286b49b9c2e55b06a2075b277273fffc283ff6e9c4e46883dc206c68" exitCode=0 Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.176785 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerDied","Data":"06bd6ecb286b49b9c2e55b06a2075b277273fffc283ff6e9c4e46883dc206c68"} Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.179261 4724 generic.go:334] "Generic (PLEG): container finished" podID="6e165de7-7e1a-47c3-84d2-9fc675a2224a" containerID="c9d18c20c6962db8499f2253d01a6c3230882bdfa279614ae35a397ad51ddb04" exitCode=0 Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.179438 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6e165de7-7e1a-47c3-84d2-9fc675a2224a","Type":"ContainerDied","Data":"c9d18c20c6962db8499f2253d01a6c3230882bdfa279614ae35a397ad51ddb04"} Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.677753 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.752744 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4n2f\" (UniqueName: \"kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f\") pod \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.752813 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts\") pod \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\" (UID: \"a67fc661-9de0-49e6-80d6-a87cb0e17e76\") " Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.756335 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a67fc661-9de0-49e6-80d6-a87cb0e17e76" (UID: "a67fc661-9de0-49e6-80d6-a87cb0e17e76"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.758455 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f" (OuterVolumeSpecName: "kube-api-access-d4n2f") pod "a67fc661-9de0-49e6-80d6-a87cb0e17e76" (UID: "a67fc661-9de0-49e6-80d6-a87cb0e17e76"). InnerVolumeSpecName "kube-api-access-d4n2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.855337 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4n2f\" (UniqueName: \"kubernetes.io/projected/a67fc661-9de0-49e6-80d6-a87cb0e17e76-kube-api-access-d4n2f\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:15 crc kubenswrapper[4724]: I0223 17:48:15.855649 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a67fc661-9de0-49e6-80d6-a87cb0e17e76-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.057585 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x87lg"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.190209 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerStarted","Data":"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.191282 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.196925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerStarted","Data":"ded1c50a90f38c33e0870874825e13a050c3dd69b53c46162f08b6fbf6d19bce"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.197605 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.202155 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x87lg" event={"ID":"28b45ff2-6bda-4335-aeb4-862daa049364","Type":"ContainerStarted","Data":"840c768dc77b55452906cebcd916d5bd0db9c1ce863592b2b0c4f9ef798b97ed"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.207161 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8wwt" event={"ID":"a67fc661-9de0-49e6-80d6-a87cb0e17e76","Type":"ContainerDied","Data":"991741670e82559effa0d47f96ebfa0613f99be795f5282d31b2d49562778999"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.207205 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991741670e82559effa0d47f96ebfa0613f99be795f5282d31b2d49562778999" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.207251 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j8wwt" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.222634 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6e165de7-7e1a-47c3-84d2-9fc675a2224a","Type":"ContainerStarted","Data":"688aadb7822d36dfc86c873bfc71b3d4a6581eabed9f6aa012b0aaba57edca97"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.223507 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.227118 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.297000406 podStartE2EDuration="1m9.227093472s" podCreationTimestamp="2026-02-23 17:47:07 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.492954894 +0000 UTC m=+999.309154494" lastFinishedPulling="2026-02-23 17:47:38.42304796 +0000 UTC m=+1014.239247560" observedRunningTime="2026-02-23 17:48:16.222045553 +0000 UTC m=+1052.038245153" watchObservedRunningTime="2026-02-23 17:48:16.227093472 +0000 UTC m=+1052.043293072" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.248329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2vrd" event={"ID":"bc3d191e-4725-42ef-90af-16b57d7bf649","Type":"ContainerStarted","Data":"8864c894b7303b270801ea3fe29aaacbc78e1e21bcaa7b8ee54d0612a51eeb7d"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.258904 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" event={"ID":"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135","Type":"ContainerStarted","Data":"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc"} Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.259242 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.304215 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.982455689 podStartE2EDuration="1m10.304197192s" podCreationTimestamp="2026-02-23 17:47:06 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.446059679 +0000 UTC m=+999.262259279" lastFinishedPulling="2026-02-23 17:47:38.767801182 +0000 UTC m=+1014.584000782" observedRunningTime="2026-02-23 17:48:16.293887311 +0000 UTC m=+1052.110086921" watchObservedRunningTime="2026-02-23 17:48:16.304197192 +0000 UTC m=+1052.120396792" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.345347 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w2vrd" podStartSLOduration=3.403618247 podStartE2EDuration="7.345324922s" podCreationTimestamp="2026-02-23 17:48:09 +0000 UTC" firstStartedPulling="2026-02-23 17:48:11.79116154 +0000 UTC m=+1047.607361130" lastFinishedPulling="2026-02-23 17:48:15.732868205 +0000 UTC m=+1051.549067805" observedRunningTime="2026-02-23 17:48:16.34266323 +0000 UTC m=+1052.158862830" watchObservedRunningTime="2026-02-23 17:48:16.345324922 +0000 UTC m=+1052.161524522" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.374119 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fq65r"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.378652 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=55.102756556 podStartE2EDuration="1m10.37863587s" podCreationTimestamp="2026-02-23 17:47:06 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.491333343 +0000 UTC m=+999.307532943" lastFinishedPulling="2026-02-23 17:47:38.767212637 +0000 UTC m=+1014.583412257" observedRunningTime="2026-02-23 17:48:16.368377721 +0000 UTC m=+1052.184577321" watchObservedRunningTime="2026-02-23 17:48:16.37863587 +0000 UTC m=+1052.194835480" Feb 23 17:48:16 crc kubenswrapper[4724]: W0223 17:48:16.387265 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3adf9177_46cc_47ea_8884_2868dd612c07.slice/crio-3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a WatchSource:0}: Error finding container 3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a: Status 404 returned error can't find the container with id 3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.402592 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e6b-account-create-update-2q4q5"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.402888 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" podStartSLOduration=6.402869533 podStartE2EDuration="6.402869533s" podCreationTimestamp="2026-02-23 17:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:16.390827523 +0000 UTC m=+1052.207027123" watchObservedRunningTime="2026-02-23 17:48:16.402869533 +0000 UTC m=+1052.219069123" Feb 23 17:48:16 crc kubenswrapper[4724]: W0223 17:48:16.423579 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd67fc2_80dd_4c14_aed2_99eb130182b1.slice/crio-09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29 WatchSource:0}: Error finding container 09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29: Status 404 returned error can't find the container with id 09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29 Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.427129 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-483b-account-create-update-x5wds"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.556675 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pzqhh"] Feb 23 17:48:16 crc kubenswrapper[4724]: E0223 17:48:16.556974 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fc661-9de0-49e6-80d6-a87cb0e17e76" containerName="mariadb-account-create-update" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.556989 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fc661-9de0-49e6-80d6-a87cb0e17e76" containerName="mariadb-account-create-update" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.557174 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67fc661-9de0-49e6-80d6-a87cb0e17e76" containerName="mariadb-account-create-update" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.557720 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.568067 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pzqhh"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.578992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.579072 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqq5m\" (UniqueName: \"kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.637258 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-hfvt8"] Feb 23 17:48:16 crc kubenswrapper[4724]: W0223 17:48:16.641884 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46836cc7_f4d3_432c_aa3e_c448d50a212e.slice/crio-14113e19dbc3b2db84859e3c144dec63dfca9b942ed781a16e74fab971c4d3f0 WatchSource:0}: Error finding container 14113e19dbc3b2db84859e3c144dec63dfca9b942ed781a16e74fab971c4d3f0: Status 404 returned error can't find the container with id 14113e19dbc3b2db84859e3c144dec63dfca9b942ed781a16e74fab971c4d3f0 Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.648246 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.680239 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.680323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqq5m\" (UniqueName: \"kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.681372 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.689028 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-75e7-account-create-update-j7mwp"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.698670 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f5c-account-create-update-skkrv"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.699680 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.704720 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.711293 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f5c-account-create-update-skkrv"] Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.720213 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqq5m\" (UniqueName: \"kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m\") pod \"glance-db-create-pzqhh\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.781953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbkxc\" (UniqueName: \"kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.782367 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.884329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbkxc\" (UniqueName: \"kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.884494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.885456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.891977 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.906197 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbkxc\" (UniqueName: \"kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc\") pod \"glance-4f5c-account-create-update-skkrv\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:16 crc kubenswrapper[4724]: I0223 17:48:16.972079 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.288545 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-75e7-account-create-update-j7mwp" event={"ID":"1dea0f4c-65e8-438f-a731-024b2074c8df","Type":"ContainerStarted","Data":"3a35009c3015a93806124800b99dae482bbeb42551adb7459592551248ef6e3b"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.288776 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-75e7-account-create-update-j7mwp" event={"ID":"1dea0f4c-65e8-438f-a731-024b2074c8df","Type":"ContainerStarted","Data":"08a7ec2085943b5c7e467a86fede3ae28b80656564809ee07b266b89ebcf1a69"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.309605 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e6b-account-create-update-2q4q5" event={"ID":"3adf9177-46cc-47ea-8884-2868dd612c07","Type":"ContainerStarted","Data":"e527f021b3523d1876018534bed4e165a6c75ee9d83006a33586c06f283ad5b6"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.309668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e6b-account-create-update-2q4q5" event={"ID":"3adf9177-46cc-47ea-8884-2868dd612c07","Type":"ContainerStarted","Data":"3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.324622 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"46836cc7-f4d3-432c-aa3e-c448d50a212e","Type":"ContainerStarted","Data":"14113e19dbc3b2db84859e3c144dec63dfca9b942ed781a16e74fab971c4d3f0"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.348352 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-75e7-account-create-update-j7mwp" podStartSLOduration=3.348335776 podStartE2EDuration="3.348335776s" podCreationTimestamp="2026-02-23 17:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:17.348194734 +0000 UTC m=+1053.164394354" watchObservedRunningTime="2026-02-23 17:48:17.348335776 +0000 UTC m=+1053.164535376" Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.353368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-483b-account-create-update-x5wds" event={"ID":"1fd67fc2-80dd-4c14-aed2-99eb130182b1","Type":"ContainerStarted","Data":"53b7ebad95b623390eef006949fdd7cce73db7f2464e0230186f14ed534af6c8"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.353431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-483b-account-create-update-x5wds" event={"ID":"1fd67fc2-80dd-4c14-aed2-99eb130182b1","Type":"ContainerStarted","Data":"09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29"} Feb 23 17:48:17 crc 
kubenswrapper[4724]: I0223 17:48:17.379471 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-hfvt8" event={"ID":"83342eb4-7660-4e5f-96e2-883ab91b855e","Type":"ContainerStarted","Data":"b8b0290dd7c985d62b4c0175ab00cdca8faf5c6dd4f85457ccb82c659838f144"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.379515 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-hfvt8" event={"ID":"83342eb4-7660-4e5f-96e2-883ab91b855e","Type":"ContainerStarted","Data":"5bae0483f84b834cc07baf483335908aa5e2e82996b60eb2f91ef6127e247140"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.393752 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-1e6b-account-create-update-2q4q5" podStartSLOduration=5.393734436 podStartE2EDuration="5.393734436s" podCreationTimestamp="2026-02-23 17:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:17.387681935 +0000 UTC m=+1053.203881535" watchObservedRunningTime="2026-02-23 17:48:17.393734436 +0000 UTC m=+1053.209934036" Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.394901 4724 generic.go:334] "Generic (PLEG): container finished" podID="28b45ff2-6bda-4335-aeb4-862daa049364" containerID="c15b6390eb34d563565d04df57b7770748a4986a9b30c93e3356990f2e1ce9ab" exitCode=0 Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.394979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x87lg" event={"ID":"28b45ff2-6bda-4335-aeb4-862daa049364","Type":"ContainerDied","Data":"c15b6390eb34d563565d04df57b7770748a4986a9b30c93e3356990f2e1ce9ab"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.411319 4724 generic.go:334] "Generic (PLEG): container finished" podID="866fb28f-2850-4b40-8285-f89763b322e3" containerID="b7ee0a74cb8b42ae64f62fb5d95f30e31f5133f05f935d6b4c2d7551863c20eb" exitCode=0 Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.412554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fq65r" event={"ID":"866fb28f-2850-4b40-8285-f89763b322e3","Type":"ContainerDied","Data":"b7ee0a74cb8b42ae64f62fb5d95f30e31f5133f05f935d6b4c2d7551863c20eb"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.412586 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fq65r" event={"ID":"866fb28f-2850-4b40-8285-f89763b322e3","Type":"ContainerStarted","Data":"455d47506ac01f5045d4f417661c56d7bc1ba2b6508c2a892dd4b709e1b78b09"} Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.421068 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-hfvt8" podStartSLOduration=4.421041934 podStartE2EDuration="4.421041934s" podCreationTimestamp="2026-02-23 17:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:17.411765826 +0000 UTC m=+1053.227965426" watchObservedRunningTime="2026-02-23 17:48:17.421041934 +0000 UTC m=+1053.237241534" Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.431067 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-483b-account-create-update-x5wds" podStartSLOduration=5.431052041 podStartE2EDuration="5.431052041s" podCreationTimestamp="2026-02-23 17:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:17.429830367 +0000 UTC m=+1053.246029957" watchObservedRunningTime="2026-02-23 17:48:17.431052041 +0000 UTC m=+1053.247251641" Feb 23 17:48:17 crc kubenswrapper[4724]: W0223 17:48:17.528259 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0764f35c_ec7a_48c0_bdb9_da3568db426a.slice/crio-b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546 WatchSource:0}: Error finding container b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546: Status 404 returned error can't find the container with id b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546 Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.537455 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pzqhh"] Feb 23 17:48:17 crc kubenswrapper[4724]: I0223 17:48:17.611611 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f5c-account-create-update-skkrv"] Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.306381 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j8wwt"] Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.315958 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j8wwt"] Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.391892 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-clggs"] Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.393810 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.396828 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.410546 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-clggs"] Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.421897 4724 generic.go:334] "Generic (PLEG): container finished" podID="1fd67fc2-80dd-4c14-aed2-99eb130182b1" containerID="53b7ebad95b623390eef006949fdd7cce73db7f2464e0230186f14ed534af6c8" exitCode=0 Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.421971 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-483b-account-create-update-x5wds" event={"ID":"1fd67fc2-80dd-4c14-aed2-99eb130182b1","Type":"ContainerDied","Data":"53b7ebad95b623390eef006949fdd7cce73db7f2464e0230186f14ed534af6c8"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.423634 4724 generic.go:334] "Generic (PLEG): container finished" podID="83342eb4-7660-4e5f-96e2-883ab91b855e" containerID="b8b0290dd7c985d62b4c0175ab00cdca8faf5c6dd4f85457ccb82c659838f144" exitCode=0 Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.423707 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-hfvt8" event={"ID":"83342eb4-7660-4e5f-96e2-883ab91b855e","Type":"ContainerDied","Data":"b8b0290dd7c985d62b4c0175ab00cdca8faf5c6dd4f85457ccb82c659838f144"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.425332 4724 generic.go:334] "Generic (PLEG): container finished" podID="0764f35c-ec7a-48c0-bdb9-da3568db426a" containerID="91233d0bfc6d43e7787f565c29d652054e55ccd20a88722464640b9da3923f5c" exitCode=0 Feb 23 17:48:18 crc 
kubenswrapper[4724]: I0223 17:48:18.425427 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pzqhh" event={"ID":"0764f35c-ec7a-48c0-bdb9-da3568db426a","Type":"ContainerDied","Data":"91233d0bfc6d43e7787f565c29d652054e55ccd20a88722464640b9da3923f5c"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.425500 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pzqhh" event={"ID":"0764f35c-ec7a-48c0-bdb9-da3568db426a","Type":"ContainerStarted","Data":"b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.427329 4724 generic.go:334] "Generic (PLEG): container finished" podID="1dea0f4c-65e8-438f-a731-024b2074c8df" containerID="3a35009c3015a93806124800b99dae482bbeb42551adb7459592551248ef6e3b" exitCode=0 Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.427467 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-75e7-account-create-update-j7mwp" event={"ID":"1dea0f4c-65e8-438f-a731-024b2074c8df","Type":"ContainerDied","Data":"3a35009c3015a93806124800b99dae482bbeb42551adb7459592551248ef6e3b"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.445514 4724 generic.go:334] "Generic (PLEG): container finished" podID="3adf9177-46cc-47ea-8884-2868dd612c07" containerID="e527f021b3523d1876018534bed4e165a6c75ee9d83006a33586c06f283ad5b6" exitCode=0 Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.445635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e6b-account-create-update-2q4q5" event={"ID":"3adf9177-46cc-47ea-8884-2868dd612c07","Type":"ContainerDied","Data":"e527f021b3523d1876018534bed4e165a6c75ee9d83006a33586c06f283ad5b6"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.458111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f5c-account-create-update-skkrv" event={"ID":"08606168-c618-4094-a730-68080afc85d7","Type":"ContainerStarted","Data":"966fa6a5bd5eb4a2b058eb8b35d278396f9980fe1a6dd3b6b518e73bd4d555e8"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.458173 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f5c-account-create-update-skkrv" event={"ID":"08606168-c618-4094-a730-68080afc85d7","Type":"ContainerStarted","Data":"accac8ca3a40814d96135f33cc9860e231ef0486a05bf5f135d9d7dc39bf9c98"} Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.520327 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxsl\" (UniqueName: \"kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.520444 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.572194 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f5c-account-create-update-skkrv" podStartSLOduration=2.572170766 podStartE2EDuration="2.572170766s" podCreationTimestamp="2026-02-23 
17:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:18.572140026 +0000 UTC m=+1054.388339636" watchObservedRunningTime="2026-02-23 17:48:18.572170766 +0000 UTC m=+1054.388370366" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.622241 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgxsl\" (UniqueName: \"kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.622321 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.623018 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.640320 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgxsl\" (UniqueName: \"kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl\") pod \"root-account-create-update-clggs\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.728032 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-clggs" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.862960 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.927063 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts\") pod \"28b45ff2-6bda-4335-aeb4-862daa049364\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.927118 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzflg\" (UniqueName: \"kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg\") pod \"28b45ff2-6bda-4335-aeb4-862daa049364\" (UID: \"28b45ff2-6bda-4335-aeb4-862daa049364\") " Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.928052 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28b45ff2-6bda-4335-aeb4-862daa049364" (UID: "28b45ff2-6bda-4335-aeb4-862daa049364"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.935414 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg" (OuterVolumeSpecName: "kube-api-access-rzflg") pod "28b45ff2-6bda-4335-aeb4-862daa049364" (UID: "28b45ff2-6bda-4335-aeb4-862daa049364"). InnerVolumeSpecName "kube-api-access-rzflg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.954283 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fq65r" Feb 23 17:48:18 crc kubenswrapper[4724]: I0223 17:48:18.966960 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67fc661-9de0-49e6-80d6-a87cb0e17e76" path="/var/lib/kubelet/pods/a67fc661-9de0-49e6-80d6-a87cb0e17e76/volumes" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.028363 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts\") pod \"866fb28f-2850-4b40-8285-f89763b322e3\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.028519 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wb9v\" (UniqueName: \"kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v\") pod \"866fb28f-2850-4b40-8285-f89763b322e3\" (UID: \"866fb28f-2850-4b40-8285-f89763b322e3\") " Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.028908 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "866fb28f-2850-4b40-8285-f89763b322e3" (UID: "866fb28f-2850-4b40-8285-f89763b322e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.028969 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b45ff2-6bda-4335-aeb4-862daa049364-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.028983 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzflg\" (UniqueName: \"kubernetes.io/projected/28b45ff2-6bda-4335-aeb4-862daa049364-kube-api-access-rzflg\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.033642 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v" (OuterVolumeSpecName: "kube-api-access-5wb9v") pod "866fb28f-2850-4b40-8285-f89763b322e3" (UID: "866fb28f-2850-4b40-8285-f89763b322e3"). InnerVolumeSpecName "kube-api-access-5wb9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.130845 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wb9v\" (UniqueName: \"kubernetes.io/projected/866fb28f-2850-4b40-8285-f89763b322e3-kube-api-access-5wb9v\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.130884 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866fb28f-2850-4b40-8285-f89763b322e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.282197 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-clggs"] Feb 23 17:48:19 crc kubenswrapper[4724]: W0223 17:48:19.283377 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77748b06_eebe_407f_b827_4b5e727d7438.slice/crio-a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b WatchSource:0}: Error finding container a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b: Status 404 returned error can't find the container with id a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.467950 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fq65r" event={"ID":"866fb28f-2850-4b40-8285-f89763b322e3","Type":"ContainerDied","Data":"455d47506ac01f5045d4f417661c56d7bc1ba2b6508c2a892dd4b709e1b78b09"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.468015 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455d47506ac01f5045d4f417661c56d7bc1ba2b6508c2a892dd4b709e1b78b09" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.467970 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fq65r" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.470884 4724 generic.go:334] "Generic (PLEG): container finished" podID="08606168-c618-4094-a730-68080afc85d7" containerID="966fa6a5bd5eb4a2b058eb8b35d278396f9980fe1a6dd3b6b518e73bd4d555e8" exitCode=0 Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.470972 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f5c-account-create-update-skkrv" event={"ID":"08606168-c618-4094-a730-68080afc85d7","Type":"ContainerDied","Data":"966fa6a5bd5eb4a2b058eb8b35d278396f9980fe1a6dd3b6b518e73bd4d555e8"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.473195 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"46836cc7-f4d3-432c-aa3e-c448d50a212e","Type":"ContainerStarted","Data":"a833c4847099ba4c662723dff765412ce5d27634b9d3023b80d5854a01b98cad"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.473243 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"46836cc7-f4d3-432c-aa3e-c448d50a212e","Type":"ContainerStarted","Data":"f2b4351a30450d9c5755ecea9de4f7f26331dbae7c5df073a44ecfa253b12c2f"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.473430 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.475117 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-clggs" event={"ID":"77748b06-eebe-407f-b827-4b5e727d7438","Type":"ContainerStarted","Data":"4d5d27132bce363f640305e795e5f100871e5bcfa4d27f8fecb29a0bbde49b45"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.475146 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-clggs" event={"ID":"77748b06-eebe-407f-b827-4b5e727d7438","Type":"ContainerStarted","Data":"a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.481024 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-x87lg" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.481645 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x87lg" event={"ID":"28b45ff2-6bda-4335-aeb4-862daa049364","Type":"ContainerDied","Data":"840c768dc77b55452906cebcd916d5bd0db9c1ce863592b2b0c4f9ef798b97ed"} Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.481670 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840c768dc77b55452906cebcd916d5bd0db9c1ce863592b2b0c4f9ef798b97ed" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.532248 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.662019466 podStartE2EDuration="7.532229829s" podCreationTimestamp="2026-02-23 17:48:12 +0000 UTC" firstStartedPulling="2026-02-23 17:48:16.646514096 +0000 UTC m=+1052.462713696" lastFinishedPulling="2026-02-23 17:48:18.516724459 +0000 UTC m=+1054.332924059" observedRunningTime="2026-02-23 17:48:19.52375242 +0000 UTC m=+1055.339952020" watchObservedRunningTime="2026-02-23 17:48:19.532229829 +0000 UTC m=+1055.348429429" Feb 23 17:48:19 crc kubenswrapper[4724]: I0223 17:48:19.590627 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-clggs" podStartSLOduration=1.59060024 podStartE2EDuration="1.59060024s" podCreationTimestamp="2026-02-23 17:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:19.579655903 +0000 UTC m=+1055.395855503" watchObservedRunningTime="2026-02-23 17:48:19.59060024 +0000 UTC m=+1055.406799850" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.016760 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.160306 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts\") pod \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.160561 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2mb\" (UniqueName: \"kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb\") pod \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\" (UID: \"1fd67fc2-80dd-4c14-aed2-99eb130182b1\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.161048 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fd67fc2-80dd-4c14-aed2-99eb130182b1" (UID: "1fd67fc2-80dd-4c14-aed2-99eb130182b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.196325 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb" (OuterVolumeSpecName: "kube-api-access-6z2mb") pod "1fd67fc2-80dd-4c14-aed2-99eb130182b1" (UID: "1fd67fc2-80dd-4c14-aed2-99eb130182b1"). 
InnerVolumeSpecName "kube-api-access-6z2mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.263876 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd67fc2-80dd-4c14-aed2-99eb130182b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.263908 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2mb\" (UniqueName: \"kubernetes.io/projected/1fd67fc2-80dd-4c14-aed2-99eb130182b1-kube-api-access-6z2mb\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.287013 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.296731 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.365214 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts\") pod \"3adf9177-46cc-47ea-8884-2868dd612c07\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.365263 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfg2t\" (UniqueName: \"kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t\") pod \"3adf9177-46cc-47ea-8884-2868dd612c07\" (UID: \"3adf9177-46cc-47ea-8884-2868dd612c07\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.365322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts\") pod \"0764f35c-ec7a-48c0-bdb9-da3568db426a\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.365361 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqq5m\" (UniqueName: \"kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m\") pod \"0764f35c-ec7a-48c0-bdb9-da3568db426a\" (UID: \"0764f35c-ec7a-48c0-bdb9-da3568db426a\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.366768 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3adf9177-46cc-47ea-8884-2868dd612c07" (UID: "3adf9177-46cc-47ea-8884-2868dd612c07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.366978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0764f35c-ec7a-48c0-bdb9-da3568db426a" (UID: "0764f35c-ec7a-48c0-bdb9-da3568db426a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.369097 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m" (OuterVolumeSpecName: "kube-api-access-nqq5m") pod "0764f35c-ec7a-48c0-bdb9-da3568db426a" (UID: "0764f35c-ec7a-48c0-bdb9-da3568db426a"). InnerVolumeSpecName "kube-api-access-nqq5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.369754 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t" (OuterVolumeSpecName: "kube-api-access-nfg2t") pod "3adf9177-46cc-47ea-8884-2868dd612c07" (UID: "3adf9177-46cc-47ea-8884-2868dd612c07"). InnerVolumeSpecName "kube-api-access-nfg2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.374220 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.380692 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.467667 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts\") pod \"83342eb4-7660-4e5f-96e2-883ab91b855e\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.467768 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2dgh\" (UniqueName: \"kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh\") pod \"83342eb4-7660-4e5f-96e2-883ab91b855e\" (UID: \"83342eb4-7660-4e5f-96e2-883ab91b855e\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.467820 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts\") pod \"1dea0f4c-65e8-438f-a731-024b2074c8df\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.467890 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n67r\" (UniqueName: \"kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r\") pod \"1dea0f4c-65e8-438f-a731-024b2074c8df\" (UID: \"1dea0f4c-65e8-438f-a731-024b2074c8df\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.468113 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83342eb4-7660-4e5f-96e2-883ab91b855e" (UID: "83342eb4-7660-4e5f-96e2-883ab91b855e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.468432 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dea0f4c-65e8-438f-a731-024b2074c8df" (UID: "1dea0f4c-65e8-438f-a731-024b2074c8df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470011 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83342eb4-7660-4e5f-96e2-883ab91b855e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470054 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dea0f4c-65e8-438f-a731-024b2074c8df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470067 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3adf9177-46cc-47ea-8884-2868dd612c07-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470553 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfg2t\" (UniqueName: \"kubernetes.io/projected/3adf9177-46cc-47ea-8884-2868dd612c07-kube-api-access-nfg2t\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470625 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0764f35c-ec7a-48c0-bdb9-da3568db426a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.470639 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqq5m\" (UniqueName: \"kubernetes.io/projected/0764f35c-ec7a-48c0-bdb9-da3568db426a-kube-api-access-nqq5m\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.471344 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh" (OuterVolumeSpecName: "kube-api-access-x2dgh") pod "83342eb4-7660-4e5f-96e2-883ab91b855e" (UID: "83342eb4-7660-4e5f-96e2-883ab91b855e"). InnerVolumeSpecName "kube-api-access-x2dgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.474011 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r" (OuterVolumeSpecName: "kube-api-access-2n67r") pod "1dea0f4c-65e8-438f-a731-024b2074c8df" (UID: "1dea0f4c-65e8-438f-a731-024b2074c8df"). InnerVolumeSpecName "kube-api-access-2n67r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.490264 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e6b-account-create-update-2q4q5" event={"ID":"3adf9177-46cc-47ea-8884-2868dd612c07","Type":"ContainerDied","Data":"3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.490316 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd8507fec01d542a80ba7f354dca0fad88a63b7333e0d5dfd69a72a6fb4285a" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.490280 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e6b-account-create-update-2q4q5" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.491715 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-483b-account-create-update-x5wds" event={"ID":"1fd67fc2-80dd-4c14-aed2-99eb130182b1","Type":"ContainerDied","Data":"09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.491765 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a9e5fdade5145bddf2e81c56e0f32d4d94868ca43603248968ebe929f28f29" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.491806 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-483b-account-create-update-x5wds" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.494454 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-hfvt8" event={"ID":"83342eb4-7660-4e5f-96e2-883ab91b855e","Type":"ContainerDied","Data":"5bae0483f84b834cc07baf483335908aa5e2e82996b60eb2f91ef6127e247140"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.494488 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bae0483f84b834cc07baf483335908aa5e2e82996b60eb2f91ef6127e247140" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.494500 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-hfvt8" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.498775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pzqhh" event={"ID":"0764f35c-ec7a-48c0-bdb9-da3568db426a","Type":"ContainerDied","Data":"b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.498804 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b272704713eb649275e05e32fb59dbd82269609b267c1738f9ea87c2589ee546" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.498853 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pzqhh" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.502336 4724 generic.go:334] "Generic (PLEG): container finished" podID="77748b06-eebe-407f-b827-4b5e727d7438" containerID="4d5d27132bce363f640305e795e5f100871e5bcfa4d27f8fecb29a0bbde49b45" exitCode=0 Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.502810 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-clggs" event={"ID":"77748b06-eebe-407f-b827-4b5e727d7438","Type":"ContainerDied","Data":"4d5d27132bce363f640305e795e5f100871e5bcfa4d27f8fecb29a0bbde49b45"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.505751 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-75e7-account-create-update-j7mwp" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.506171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-75e7-account-create-update-j7mwp" event={"ID":"1dea0f4c-65e8-438f-a731-024b2074c8df","Type":"ContainerDied","Data":"08a7ec2085943b5c7e467a86fede3ae28b80656564809ee07b266b89ebcf1a69"} Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.506207 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a7ec2085943b5c7e467a86fede3ae28b80656564809ee07b266b89ebcf1a69" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.573604 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2dgh\" (UniqueName: \"kubernetes.io/projected/83342eb4-7660-4e5f-96e2-883ab91b855e-kube-api-access-x2dgh\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.573628 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n67r\" (UniqueName: \"kubernetes.io/projected/1dea0f4c-65e8-438f-a731-024b2074c8df-kube-api-access-2n67r\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.890607 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.979851 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts\") pod \"08606168-c618-4094-a730-68080afc85d7\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.979962 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbkxc\" (UniqueName: \"kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc\") pod \"08606168-c618-4094-a730-68080afc85d7\" (UID: \"08606168-c618-4094-a730-68080afc85d7\") " Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.980213 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:20 crc kubenswrapper[4724]: E0223 17:48:20.980503 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 17:48:20 crc kubenswrapper[4724]: E0223 17:48:20.980543 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.980564 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08606168-c618-4094-a730-68080afc85d7" (UID: "08606168-c618-4094-a730-68080afc85d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:20 crc kubenswrapper[4724]: E0223 17:48:20.980603 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift podName:3946025b-c492-4f1b-a3c3-62d2fa658586 nodeName:}" failed. No retries permitted until 2026-02-23 17:48:36.98058071 +0000 UTC m=+1072.796780310 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift") pod "swift-storage-0" (UID: "3946025b-c492-4f1b-a3c3-62d2fa658586") : configmap "swift-ring-files" not found Feb 23 17:48:20 crc kubenswrapper[4724]: I0223 17:48:20.985230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc" (OuterVolumeSpecName: "kube-api-access-vbkxc") pod "08606168-c618-4094-a730-68080afc85d7" (UID: "08606168-c618-4094-a730-68080afc85d7"). InnerVolumeSpecName "kube-api-access-vbkxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.081791 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08606168-c618-4094-a730-68080afc85d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.081825 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbkxc\" (UniqueName: \"kubernetes.io/projected/08606168-c618-4094-a730-68080afc85d7-kube-api-access-vbkxc\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.114600 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.188538 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.188799 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="dnsmasq-dns" containerID="cri-o://925e90b22081f3b5d2ac56da2f541030986352340e4160a8634ae05c35c20a73" gracePeriod=10 Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.522441 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f5c-account-create-update-skkrv" event={"ID":"08606168-c618-4094-a730-68080afc85d7","Type":"ContainerDied","Data":"accac8ca3a40814d96135f33cc9860e231ef0486a05bf5f135d9d7dc39bf9c98"} Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.522478 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="accac8ca3a40814d96135f33cc9860e231ef0486a05bf5f135d9d7dc39bf9c98" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.522786 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f5c-account-create-update-skkrv" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.534433 4724 generic.go:334] "Generic (PLEG): container finished" podID="f92d7742-3151-48d7-8493-ff07e6803966" containerID="925e90b22081f3b5d2ac56da2f541030986352340e4160a8634ae05c35c20a73" exitCode=0 Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.534631 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" event={"ID":"f92d7742-3151-48d7-8493-ff07e6803966","Type":"ContainerDied","Data":"925e90b22081f3b5d2ac56da2f541030986352340e4160a8634ae05c35c20a73"} Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.680371 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.688170 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lzxrb" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961163 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hh76w-config-js88d"] Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961529 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866fb28f-2850-4b40-8285-f89763b322e3" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961545 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="866fb28f-2850-4b40-8285-f89763b322e3" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961556 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83342eb4-7660-4e5f-96e2-883ab91b855e" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961562 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="83342eb4-7660-4e5f-96e2-883ab91b855e" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961577 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0764f35c-ec7a-48c0-bdb9-da3568db426a" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961583 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0764f35c-ec7a-48c0-bdb9-da3568db426a" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961594 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b45ff2-6bda-4335-aeb4-862daa049364" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961600 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b45ff2-6bda-4335-aeb4-862daa049364" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961609 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08606168-c618-4094-a730-68080afc85d7" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961614 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="08606168-c618-4094-a730-68080afc85d7" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961623 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3adf9177-46cc-47ea-8884-2868dd612c07" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961629 4724 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3adf9177-46cc-47ea-8884-2868dd612c07" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961641 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd67fc2-80dd-4c14-aed2-99eb130182b1" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961647 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd67fc2-80dd-4c14-aed2-99eb130182b1" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: E0223 17:48:21.961653 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dea0f4c-65e8-438f-a731-024b2074c8df" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.961658 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dea0f4c-65e8-438f-a731-024b2074c8df" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963645 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="866fb28f-2850-4b40-8285-f89763b322e3" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963669 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="83342eb4-7660-4e5f-96e2-883ab91b855e" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963676 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0764f35c-ec7a-48c0-bdb9-da3568db426a" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963701 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd67fc2-80dd-4c14-aed2-99eb130182b1" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963710 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="08606168-c618-4094-a730-68080afc85d7" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963720 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dea0f4c-65e8-438f-a731-024b2074c8df" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963730 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b45ff2-6bda-4335-aeb4-862daa049364" containerName="mariadb-database-create" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.963740 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3adf9177-46cc-47ea-8884-2868dd612c07" containerName="mariadb-account-create-update" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.964315 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.965922 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 17:48:21 crc kubenswrapper[4724]: I0223 17:48:21.979303 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w-config-js88d"] Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.006301 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.006748 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-clggs" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120130 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc\") pod \"f92d7742-3151-48d7-8493-ff07e6803966\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120370 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config\") pod \"f92d7742-3151-48d7-8493-ff07e6803966\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120486 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksfgt\" (UniqueName: \"kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt\") pod \"f92d7742-3151-48d7-8493-ff07e6803966\" (UID: \"f92d7742-3151-48d7-8493-ff07e6803966\") " Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120519 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts\") pod \"77748b06-eebe-407f-b827-4b5e727d7438\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120545 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgxsl\" (UniqueName: \"kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl\") pod \"77748b06-eebe-407f-b827-4b5e727d7438\" (UID: \"77748b06-eebe-407f-b827-4b5e727d7438\") " Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120743 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bhpp\" (UniqueName: \"kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120807 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120852 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 
17:48:22.120946 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.120985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.121440 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77748b06-eebe-407f-b827-4b5e727d7438" (UID: "77748b06-eebe-407f-b827-4b5e727d7438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.137494 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt" (OuterVolumeSpecName: "kube-api-access-ksfgt") pod "f92d7742-3151-48d7-8493-ff07e6803966" (UID: "f92d7742-3151-48d7-8493-ff07e6803966"). InnerVolumeSpecName "kube-api-access-ksfgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.138575 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl" (OuterVolumeSpecName: "kube-api-access-dgxsl") pod "77748b06-eebe-407f-b827-4b5e727d7438" (UID: "77748b06-eebe-407f-b827-4b5e727d7438"). InnerVolumeSpecName "kube-api-access-dgxsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.166974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config" (OuterVolumeSpecName: "config") pod "f92d7742-3151-48d7-8493-ff07e6803966" (UID: "f92d7742-3151-48d7-8493-ff07e6803966"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.167786 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f92d7742-3151-48d7-8493-ff07e6803966" (UID: "f92d7742-3151-48d7-8493-ff07e6803966"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222345 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bhpp\" (UniqueName: \"kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222408 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222448 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222518 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222528 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92d7742-3151-48d7-8493-ff07e6803966-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222537 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksfgt\" (UniqueName: \"kubernetes.io/projected/f92d7742-3151-48d7-8493-ff07e6803966-kube-api-access-ksfgt\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222547 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77748b06-eebe-407f-b827-4b5e727d7438-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222556 
4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgxsl\" (UniqueName: \"kubernetes.io/projected/77748b06-eebe-407f-b827-4b5e727d7438-kube-api-access-dgxsl\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222802 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222856 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.222891 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.223120 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.225178 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.241150 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bhpp\" (UniqueName: \"kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp\") pod \"ovn-controller-hh76w-config-js88d\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.316645 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.551106 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" event={"ID":"f92d7742-3151-48d7-8493-ff07e6803966","Type":"ContainerDied","Data":"41618f9c9025ec6ea3d96c89d9b63cd1334fed084b721032764f9072ea349e07"} Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.551164 4724 scope.go:117] "RemoveContainer" containerID="925e90b22081f3b5d2ac56da2f541030986352340e4160a8634ae05c35c20a73" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.551256 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-4gdn2" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.568496 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-clggs" event={"ID":"77748b06-eebe-407f-b827-4b5e727d7438","Type":"ContainerDied","Data":"a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b"} Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.568556 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c7a37b95e3df9b8cdb952b5724941648449329821946d7a5f68e4fa9fb4b4b" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.568515 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-clggs" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.605019 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.614418 4724 scope.go:117] "RemoveContainer" containerID="b49bd863857d2d200e65ce8a19823c0111ab1a3ee4e7a82b58bdb77647345899" Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.624470 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-4gdn2"] Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.634465 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w-config-js88d"] Feb 23 17:48:22 crc kubenswrapper[4724]: I0223 17:48:22.961413 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92d7742-3151-48d7-8493-ff07e6803966" path="/var/lib/kubelet/pods/f92d7742-3151-48d7-8493-ff07e6803966/volumes" Feb 23 17:48:23 crc kubenswrapper[4724]: I0223 17:48:23.578453 4724 generic.go:334] "Generic (PLEG): container finished" podID="12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" containerID="b4cbb44de734bb9c58a1961f32ab5a2c11cdb1c4c95a88fa7a9a77587c86edee" exitCode=0 Feb 23 17:48:23 crc kubenswrapper[4724]: I0223 17:48:23.578526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-js88d" event={"ID":"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a","Type":"ContainerDied","Data":"b4cbb44de734bb9c58a1961f32ab5a2c11cdb1c4c95a88fa7a9a77587c86edee"} Feb 23 17:48:23 crc kubenswrapper[4724]: I0223 17:48:23.578775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-js88d" event={"ID":"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a","Type":"ContainerStarted","Data":"104a73f33135c47b5a3baaeebe587d639ae0889c28133ec00352ec983180c533"} Feb 23 17:48:24 crc kubenswrapper[4724]: I0223 17:48:24.774423 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-clggs"] Feb 23 17:48:24 crc kubenswrapper[4724]: I0223 17:48:24.781312 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-clggs"] Feb 23 17:48:24 crc kubenswrapper[4724]: I0223 17:48:24.943930 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:24 crc kubenswrapper[4724]: I0223 17:48:24.962753 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77748b06-eebe-407f-b827-4b5e727d7438" path="/var/lib/kubelet/pods/77748b06-eebe-407f-b827-4b5e727d7438/volumes" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081452 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081512 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081557 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run" (OuterVolumeSpecName: "var-run") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081601 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081649 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081653 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081738 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081774 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bhpp\" (UniqueName: \"kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp\") pod \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\" (UID: \"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a\") " Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.081802 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.082112 4724 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.082127 4724 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.082136 4724 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.082727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.082923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts" (OuterVolumeSpecName: "scripts") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.087308 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp" (OuterVolumeSpecName: "kube-api-access-2bhpp") pod "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" (UID: "12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a"). InnerVolumeSpecName "kube-api-access-2bhpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.183233 4724 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.183282 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.183292 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bhpp\" (UniqueName: \"kubernetes.io/projected/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a-kube-api-access-2bhpp\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.595777 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-js88d" event={"ID":"12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a","Type":"ContainerDied","Data":"104a73f33135c47b5a3baaeebe587d639ae0889c28133ec00352ec983180c533"} Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.596117 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="104a73f33135c47b5a3baaeebe587d639ae0889c28133ec00352ec983180c533" Feb 23 17:48:25 crc kubenswrapper[4724]: I0223 17:48:25.595857 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w-config-js88d" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.064739 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hh76w-config-js88d"] Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.073620 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hh76w-config-js88d"] Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104130 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hh76w-config-wbnm7"] Feb 23 17:48:26 crc kubenswrapper[4724]: E0223 17:48:26.104478 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" containerName="ovn-config" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104493 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" containerName="ovn-config" Feb 23 17:48:26 crc kubenswrapper[4724]: E0223 17:48:26.104520 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77748b06-eebe-407f-b827-4b5e727d7438" containerName="mariadb-account-create-update" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104528 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="77748b06-eebe-407f-b827-4b5e727d7438" containerName="mariadb-account-create-update" Feb 23 17:48:26 crc kubenswrapper[4724]: E0223 17:48:26.104542 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="init" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104548 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="init" Feb 23 17:48:26 crc kubenswrapper[4724]: E0223 17:48:26.104557 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="dnsmasq-dns" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104563 
4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="dnsmasq-dns" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104710 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92d7742-3151-48d7-8493-ff07e6803966" containerName="dnsmasq-dns" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104734 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="77748b06-eebe-407f-b827-4b5e727d7438" containerName="mariadb-account-create-update" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.104742 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" containerName="ovn-config" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.105263 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.107275 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.117775 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w-config-wbnm7"] Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197753 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j8d\" (UniqueName: \"kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197922 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.197975 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299097 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299148 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2j8d\" (UniqueName: \"kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299222 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299243 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299283 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299571 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299571 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.299984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.301012 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.318477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2j8d\" (UniqueName: \"kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d\") pod \"ovn-controller-hh76w-config-wbnm7\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.421092 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.627721 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hh76w" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.832588 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qqzwt"] Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.833871 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.837289 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.837367 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5xnsd" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.844692 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qqzwt"] Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.921924 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.921984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.922013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.922051 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9476\" (UniqueName: \"kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:26 crc kubenswrapper[4724]: W0223 17:48:26.962285 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bb6da05_6990_43b6_91ba_9ae08f245d3a.slice/crio-0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035 WatchSource:0}: Error finding container 0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035: Status 404 returned error can't find the container with id 0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035 Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.977766 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a" path="/var/lib/kubelet/pods/12ce2bbe-ce7a-4af3-8d91-9f842d1a1e6a/volumes" Feb 23 17:48:26 crc kubenswrapper[4724]: I0223 17:48:26.978608 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hh76w-config-wbnm7"] Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.024187 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.024238 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.024256 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.024281 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9476\" (UniqueName: \"kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.032686 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.032707 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.037887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.042213 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9476\" (UniqueName: \"kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476\") pod \"glance-db-sync-qqzwt\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.152813 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qqzwt" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.626882 4724 generic.go:334] "Generic (PLEG): container finished" podID="bc3d191e-4725-42ef-90af-16b57d7bf649" containerID="8864c894b7303b270801ea3fe29aaacbc78e1e21bcaa7b8ee54d0612a51eeb7d" exitCode=0 Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.626959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2vrd" event={"ID":"bc3d191e-4725-42ef-90af-16b57d7bf649","Type":"ContainerDied","Data":"8864c894b7303b270801ea3fe29aaacbc78e1e21bcaa7b8ee54d0612a51eeb7d"} Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.628893 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-wbnm7" event={"ID":"6bb6da05-6990-43b6-91ba-9ae08f245d3a","Type":"ContainerStarted","Data":"b2da1cebcd254d7bd0efcccf81d514bcae9dae998557c8791d0fbd6420e53d83"} Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.628949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-wbnm7" event={"ID":"6bb6da05-6990-43b6-91ba-9ae08f245d3a","Type":"ContainerStarted","Data":"0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035"} Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.670047 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hh76w-config-wbnm7" podStartSLOduration=1.6700304080000001 podStartE2EDuration="1.670030408s" podCreationTimestamp="2026-02-23 17:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:27.666802151 +0000 UTC m=+1063.483001761" watchObservedRunningTime="2026-02-23 17:48:27.670030408 +0000 UTC m=+1063.486229998" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.751863 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.751920 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.807223 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="6e165de7-7e1a-47c3-84d2-9fc675a2224a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 23 17:48:27 crc kubenswrapper[4724]: I0223 17:48:27.842282 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qqzwt"] Feb 23 17:48:28 crc kubenswrapper[4724]: I0223 17:48:28.132721 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 23 17:48:28 crc kubenswrapper[4724]: I0223 17:48:28.439964 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Feb 23 17:48:28 crc kubenswrapper[4724]: I0223 17:48:28.639785 4724 generic.go:334] "Generic (PLEG): container finished" podID="6bb6da05-6990-43b6-91ba-9ae08f245d3a" containerID="b2da1cebcd254d7bd0efcccf81d514bcae9dae998557c8791d0fbd6420e53d83" exitCode=0 Feb 23 17:48:28 crc kubenswrapper[4724]: I0223 17:48:28.639868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-wbnm7" event={"ID":"6bb6da05-6990-43b6-91ba-9ae08f245d3a","Type":"ContainerDied","Data":"b2da1cebcd254d7bd0efcccf81d514bcae9dae998557c8791d0fbd6420e53d83"} Feb 23 17:48:28 crc kubenswrapper[4724]: I0223 17:48:28.641713 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qqzwt" event={"ID":"4835f23c-1737-45fa-8d8f-d5a381c9d498","Type":"ContainerStarted","Data":"a6916bfb1b1ec948e2212bc3352e874aa4f20c8d3c5a868458f7259fbb81bc2c"} Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.354190 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468542 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468649 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468725 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468836 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468874 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468900 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.468941 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzmw\" (UniqueName: 
\"kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw\") pod \"bc3d191e-4725-42ef-90af-16b57d7bf649\" (UID: \"bc3d191e-4725-42ef-90af-16b57d7bf649\") " Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.471178 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.471979 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.474580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw" (OuterVolumeSpecName: "kube-api-access-hgzmw") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "kube-api-access-hgzmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.480453 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.493974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts" (OuterVolumeSpecName: "scripts") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.494848 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.503610 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bc3d191e-4725-42ef-90af-16b57d7bf649" (UID: "bc3d191e-4725-42ef-90af-16b57d7bf649"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570892 4724 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570937 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzmw\" (UniqueName: \"kubernetes.io/projected/bc3d191e-4725-42ef-90af-16b57d7bf649-kube-api-access-hgzmw\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570951 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570964 4724 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570976 4724 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc3d191e-4725-42ef-90af-16b57d7bf649-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.570989 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc3d191e-4725-42ef-90af-16b57d7bf649-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.571001 4724 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc3d191e-4725-42ef-90af-16b57d7bf649-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.649811 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w2vrd" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.652215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2vrd" event={"ID":"bc3d191e-4725-42ef-90af-16b57d7bf649","Type":"ContainerDied","Data":"9734e487247cedd89987b4aeb597bb7a7ca4c0e0d71c7cfcaea08988e8262a35"} Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.652249 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9734e487247cedd89987b4aeb597bb7a7ca4c0e0d71c7cfcaea08988e8262a35" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.660231 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerStarted","Data":"e974807255898f073fdf68444b2590a33f3d9146d2fd3b57a7e029dfa4743a35"} Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.711596 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.196213322 podStartE2EDuration="1m16.71157912s" podCreationTimestamp="2026-02-23 17:47:13 +0000 UTC" firstStartedPulling="2026-02-23 17:47:23.720231298 +0000 UTC m=+999.536430908" lastFinishedPulling="2026-02-23 17:48:29.235597106 +0000 UTC m=+1065.051796706" observedRunningTime="2026-02-23 17:48:29.708440883 +0000 UTC m=+1065.524640483" watchObservedRunningTime="2026-02-23 17:48:29.71157912 +0000 UTC m=+1065.527778720" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.799921 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mm7h8"] Feb 23 17:48:29 crc kubenswrapper[4724]: E0223 17:48:29.800438 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3d191e-4725-42ef-90af-16b57d7bf649" containerName="swift-ring-rebalance" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.800455 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3d191e-4725-42ef-90af-16b57d7bf649" containerName="swift-ring-rebalance" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.800601 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3d191e-4725-42ef-90af-16b57d7bf649" containerName="swift-ring-rebalance" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.801153 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.812445 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.819908 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mm7h8"] Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.876694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.876774 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xbr\" (UniqueName: \"kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.978326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.978706 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xbr\" (UniqueName: \"kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.979790 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:29 crc kubenswrapper[4724]: I0223 17:48:29.998599 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xbr\" (UniqueName: \"kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr\") pod \"root-account-create-update-mm7h8\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.068669 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.177986 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.178032 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.180451 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2j8d\" (UniqueName: \"kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182151 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182191 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182257 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182276 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run" (OuterVolumeSpecName: "var-run") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182308 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn\") pod \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\" (UID: \"6bb6da05-6990-43b6-91ba-9ae08f245d3a\") " Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182551 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182756 4724 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.182768 4724 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.183070 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.183276 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts" (OuterVolumeSpecName: "scripts") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.184074 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.185926 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d" (OuterVolumeSpecName: "kube-api-access-x2j8d") pod "6bb6da05-6990-43b6-91ba-9ae08f245d3a" (UID: "6bb6da05-6990-43b6-91ba-9ae08f245d3a"). InnerVolumeSpecName "kube-api-access-x2j8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.234422 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.286424 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2j8d\" (UniqueName: \"kubernetes.io/projected/6bb6da05-6990-43b6-91ba-9ae08f245d3a-kube-api-access-x2j8d\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.286455 4724 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bb6da05-6990-43b6-91ba-9ae08f245d3a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.286465 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.286477 4724 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6bb6da05-6990-43b6-91ba-9ae08f245d3a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.684699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hh76w-config-wbnm7" event={"ID":"6bb6da05-6990-43b6-91ba-9ae08f245d3a","Type":"ContainerDied","Data":"0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035"} Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.684965 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0273071872b269bc8ddd270ac87f9487fad2ae3ad212618a2351a2a926dcd035" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.686383 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.689040 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hh76w-config-wbnm7" Feb 23 17:48:30 crc kubenswrapper[4724]: I0223 17:48:30.852487 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mm7h8"] Feb 23 17:48:30 crc kubenswrapper[4724]: W0223 17:48:30.870544 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb690dfd_e2b9_4ea0_a4d6_9a9ff6adf4ef.slice/crio-8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91 WatchSource:0}: Error finding container 8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91: Status 404 returned error can't find the container with id 8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91 Feb 23 17:48:31 crc kubenswrapper[4724]: I0223 17:48:31.145355 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hh76w-config-wbnm7"] Feb 23 17:48:31 crc kubenswrapper[4724]: I0223 17:48:31.151470 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hh76w-config-wbnm7"] Feb 23 17:48:31 crc kubenswrapper[4724]: I0223 17:48:31.696933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mm7h8" event={"ID":"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef","Type":"ContainerStarted","Data":"06fb9d8c6b056cdb5197bad91bfc6bebeedc47243b7dfa65594633a702723d50"} Feb 23 17:48:31 crc kubenswrapper[4724]: I0223 17:48:31.696978 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mm7h8" event={"ID":"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef","Type":"ContainerStarted","Data":"8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91"} Feb 23 17:48:31 crc kubenswrapper[4724]: I0223 17:48:31.715045 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mm7h8" podStartSLOduration=2.715028069 podStartE2EDuration="2.715028069s" podCreationTimestamp="2026-02-23 17:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:31.712725362 +0000 UTC m=+1067.528924982" watchObservedRunningTime="2026-02-23 17:48:31.715028069 +0000 UTC m=+1067.531227669" Feb 23 17:48:32 crc kubenswrapper[4724]: I0223 17:48:32.643194 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:32 crc kubenswrapper[4724]: I0223 17:48:32.705793 4724 generic.go:334] "Generic (PLEG): container finished" podID="cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" containerID="06fb9d8c6b056cdb5197bad91bfc6bebeedc47243b7dfa65594633a702723d50" exitCode=0 Feb 23 17:48:32 crc kubenswrapper[4724]: I0223 17:48:32.705870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mm7h8" event={"ID":"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef","Type":"ContainerDied","Data":"06fb9d8c6b056cdb5197bad91bfc6bebeedc47243b7dfa65594633a702723d50"} Feb 23 17:48:32 crc kubenswrapper[4724]: I0223 17:48:32.965971 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb6da05-6990-43b6-91ba-9ae08f245d3a" path="/var/lib/kubelet/pods/6bb6da05-6990-43b6-91ba-9ae08f245d3a/volumes" Feb 23 17:48:33 crc kubenswrapper[4724]: I0223 17:48:33.412993 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 23 17:48:33 crc kubenswrapper[4724]: I0223 17:48:33.713893 4724 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="config-reloader" containerID="cri-o://f5bf59b649fa98c30ad816168fb029be1cc7c10b7b9f0e5f43d7540ba180fb00" gracePeriod=600 Feb 23 17:48:33 crc kubenswrapper[4724]: I0223 17:48:33.714524 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" containerID="cri-o://e974807255898f073fdf68444b2590a33f3d9146d2fd3b57a7e029dfa4743a35" gracePeriod=600 Feb 23 17:48:33 crc kubenswrapper[4724]: I0223 17:48:33.714576 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="thanos-sidecar" containerID="cri-o://de5be2342bd4a6b35d3550920aef4a03893172b11fad0fb7fd67afea2e3564d8" gracePeriod=600 Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.048289 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.153025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts\") pod \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.153337 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xbr\" (UniqueName: \"kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr\") pod \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\" (UID: \"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef\") " Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.153786 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" (UID: "cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.184592 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr" (OuterVolumeSpecName: "kube-api-access-w7xbr") pod "cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" (UID: "cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef"). InnerVolumeSpecName "kube-api-access-w7xbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.254795 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xbr\" (UniqueName: \"kubernetes.io/projected/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-kube-api-access-w7xbr\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.254831 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.723125 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerID="e974807255898f073fdf68444b2590a33f3d9146d2fd3b57a7e029dfa4743a35" exitCode=0 Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.723417 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerID="de5be2342bd4a6b35d3550920aef4a03893172b11fad0fb7fd67afea2e3564d8" exitCode=0 Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.723263 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerDied","Data":"e974807255898f073fdf68444b2590a33f3d9146d2fd3b57a7e029dfa4743a35"} Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.723479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerDied","Data":"de5be2342bd4a6b35d3550920aef4a03893172b11fad0fb7fd67afea2e3564d8"} Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.725038 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mm7h8" event={"ID":"cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef","Type":"ContainerDied","Data":"8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91"} Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.725073 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8404a957d018bce92160be2c247bc8e784f2c6d80ffd88a66c657741490ccb91" Feb 23 17:48:34 crc kubenswrapper[4724]: I0223 17:48:34.725273 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mm7h8" Feb 23 17:48:35 crc kubenswrapper[4724]: I0223 17:48:35.735165 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerID="f5bf59b649fa98c30ad816168fb029be1cc7c10b7b9f0e5f43d7540ba180fb00" exitCode=0 Feb 23 17:48:35 crc kubenswrapper[4724]: I0223 17:48:35.735214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerDied","Data":"f5bf59b649fa98c30ad816168fb029be1cc7c10b7b9f0e5f43d7540ba180fb00"} Feb 23 17:48:36 crc kubenswrapper[4724]: I0223 17:48:36.996444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:37 crc kubenswrapper[4724]: I0223 17:48:37.003253 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3946025b-c492-4f1b-a3c3-62d2fa658586-etc-swift\") pod \"swift-storage-0\" (UID: \"3946025b-c492-4f1b-a3c3-62d2fa658586\") " pod="openstack/swift-storage-0" Feb 23 17:48:37 crc kubenswrapper[4724]: I0223 17:48:37.182118 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 17:48:37 crc kubenswrapper[4724]: I0223 17:48:37.805610 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="6e165de7-7e1a-47c3-84d2-9fc675a2224a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 23 17:48:38 crc kubenswrapper[4724]: I0223 17:48:38.132294 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 23 17:48:38 crc kubenswrapper[4724]: I0223 17:48:38.178210 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:48:38 crc kubenswrapper[4724]: I0223 17:48:38.438684 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Feb 23 17:48:42 crc kubenswrapper[4724]: I0223 17:48:42.816138 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad58a78a-ccdb-4154-852e-8a8984a2a650","Type":"ContainerDied","Data":"bb42c41d5efb541cd096a5a897f5371ccbf7bcc91b1abb85e8ffc52104b8cc7b"} Feb 23 17:48:42 crc kubenswrapper[4724]: I0223 17:48:42.816488 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb42c41d5efb541cd096a5a897f5371ccbf7bcc91b1abb85e8ffc52104b8cc7b" Feb 23 17:48:42 crc kubenswrapper[4724]: I0223 17:48:42.914045 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.092618 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.092987 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmqft\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093247 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093282 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093300 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093419 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093439 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093462 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093509 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config\") pod \"ad58a78a-ccdb-4154-852e-8a8984a2a650\" (UID: \"ad58a78a-ccdb-4154-852e-8a8984a2a650\") " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.093505 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.094052 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.094066 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.095258 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.095304 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.095315 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad58a78a-ccdb-4154-852e-8a8984a2a650-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.097701 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft" (OuterVolumeSpecName: "kube-api-access-cmqft") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "kube-api-access-cmqft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.098587 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.099901 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out" (OuterVolumeSpecName: "config-out") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.100294 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.102218 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config" (OuterVolumeSpecName: "config") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.118785 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.135882 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config" (OuterVolumeSpecName: "web-config") pod "ad58a78a-ccdb-4154-852e-8a8984a2a650" (UID: "ad58a78a-ccdb-4154-852e-8a8984a2a650"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.178954 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.196921 4724 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-web-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197192 4724 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad58a78a-ccdb-4154-852e-8a8984a2a650-config-out\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197262 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197320 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmqft\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-kube-api-access-cmqft\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197439 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") on node \"crc\" " Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197543 4724 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad58a78a-ccdb-4154-852e-8a8984a2a650-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.197639 4724 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad58a78a-ccdb-4154-852e-8a8984a2a650-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.221617 4724 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.221835 4724 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11") on node "crc" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.299110 4724 reconciler_common.go:293] "Volume detached for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.403800 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.836664 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.837206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"b7d665579417547d3498b8883b9e6be44bb597338d1b8bd868e5b81d5700c22e"} Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.878186 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.890040 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.917383 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918210 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="init-config-reloader" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.918291 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="init-config-reloader" Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918431 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" containerName="mariadb-account-create-update" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.918518 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" containerName="mariadb-account-create-update" Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918577 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="config-reloader" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.918628 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="config-reloader" Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918704 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="thanos-sidecar" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.918764 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="thanos-sidecar" Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918820 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb6da05-6990-43b6-91ba-9ae08f245d3a" containerName="ovn-config" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.918882 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb6da05-6990-43b6-91ba-9ae08f245d3a" containerName="ovn-config" Feb 23 17:48:43 crc kubenswrapper[4724]: E0223 17:48:43.918948 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919005 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919252 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="thanos-sidecar" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919327 4724 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" containerName="mariadb-account-create-update" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919412 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="config-reloader" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919509 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" containerName="prometheus" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.919595 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb6da05-6990-43b6-91ba-9ae08f245d3a" containerName="ovn-config" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.921711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926070 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926304 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8mdd8" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926509 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926700 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926766 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.926999 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.927167 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.936273 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.937795 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 23 17:48:43 crc kubenswrapper[4724]: I0223 17:48:43.939990 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.126936 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.126997 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfpn\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " 
pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127029 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127051 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127082 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127116 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127144 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127160 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127187 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127203 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127223 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127241 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.127263 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231311 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231373 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231417 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231458 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 
17:48:44.231510 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231534 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231638 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfpn\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.231748 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.232092 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.232098 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.232602 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.236453 4724 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.236492 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47f183732fd6cce9e8579bb5bdfe275794daae311819ba60fd57e3b1b945523c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.238580 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.238707 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.238732 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.238831 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.239243 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.239715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.239856 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.246185 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.248802 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfpn\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.273438 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") " pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.546612 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.819143 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.868961 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerStarted","Data":"d137317e4ebddeec9b1e386a278a5512d36458acd640d2e8fe0e7e7a7470bdf1"} Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.871358 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qqzwt" event={"ID":"4835f23c-1737-45fa-8d8f-d5a381c9d498","Type":"ContainerStarted","Data":"ec242011e8cfa23eb71faae51f72794f9bc6b983c3dae0e34c6fd63b2963e704"} Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.878626 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"771152cebbd4db6bb6325aa513f579d42985f387bca6b821d90aa47b2333bdc2"} Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.892557 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qqzwt" podStartSLOduration=3.822521012 podStartE2EDuration="18.892534612s" podCreationTimestamp="2026-02-23 17:48:26 +0000 UTC" firstStartedPulling="2026-02-23 17:48:27.851251883 +0000 UTC m=+1063.667451483" lastFinishedPulling="2026-02-23 17:48:42.921265493 +0000 UTC m=+1078.737465083" observedRunningTime="2026-02-23 17:48:44.89007355 +0000 UTC m=+1080.706273150" watchObservedRunningTime="2026-02-23 17:48:44.892534612 +0000 UTC m=+1080.708734212" Feb 23 17:48:44 crc kubenswrapper[4724]: I0223 17:48:44.985850 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad58a78a-ccdb-4154-852e-8a8984a2a650" path="/var/lib/kubelet/pods/ad58a78a-ccdb-4154-852e-8a8984a2a650/volumes" Feb 23 17:48:45 crc kubenswrapper[4724]: I0223 17:48:45.899160 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"3fecade6212520af746de5e15b18999d070f49fe76a2099322f4432e3ab980c2"} Feb 23 17:48:45 crc kubenswrapper[4724]: I0223 17:48:45.899543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"dc9bdddd0d9d034eeccb7a67c83abda36a95f8ed581e20aaa74b3603143ad363"} Feb 23 17:48:45 crc kubenswrapper[4724]: I0223 17:48:45.899561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"809406c9cce84b71aeda62e29398197c54a1c5f6c62dadc271f6c4768133718e"} Feb 23 17:48:46 crc kubenswrapper[4724]: I0223 17:48:46.909369 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"2d91e79f6154e75427f162acbf5a34a5e0e6ec253cfe996b3bbfc05020b5fec6"} Feb 23 17:48:46 crc kubenswrapper[4724]: I0223 17:48:46.909770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"dbd0d7455a3caf951f9f96592fdf2e0be21952f5b47467d1b76f0ffa363c3799"} Feb 23 17:48:46 crc 
kubenswrapper[4724]: I0223 17:48:46.909783 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"0aeb4a94b960a81962f7709c2b8107c05c06338a132b7ec68ff93aabd9f8361b"} Feb 23 17:48:47 crc kubenswrapper[4724]: I0223 17:48:47.808067 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 23 17:48:47 crc kubenswrapper[4724]: I0223 17:48:47.946844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"1d3e38f6a3e91882d38c7f080553ddd7fdc5716bc80d29ed18206f78c5e1ba3b"} Feb 23 17:48:47 crc kubenswrapper[4724]: I0223 17:48:47.948153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerStarted","Data":"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"} Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.135598 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.439631 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.986998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"845308efa2fa854df54b6c425a29030e74d3e32469a31f5fdca26d673b4e5bb6"} Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.987053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"1a1133875113d493f5be346750e5757e5dd843c5d29681bd335867f4c810100b"} Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.987065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"4398f1d2a52f8ef9ecd5053e9575fb3d062d59712a763e72c36b74cfed38ae52"} Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.987108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"3db69312f83ab95844c45ad1b6a3ce1b147485f917cc65fd0789b1f95cb07531"} Feb 23 17:48:48 crc kubenswrapper[4724]: I0223 17:48:48.987119 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"077d31e9013f11c1e6ea27bb934ed093d024eaadf0222b3d0253cfa99da3c2c2"} Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.743550 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4rgbz"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.744959 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.748483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4rgbz"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.848017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.848110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4sx\" (UniqueName: \"kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.854208 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c185-account-create-update-2zg86"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.855261 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.866899 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c185-account-create-update-2zg86"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.877721 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.950187 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhnl\" (UniqueName: \"kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.950572 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.950708 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4sx\" (UniqueName: \"kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.950811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.952342 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.971283 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w2gbv"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.972661 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.986080 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e738-account-create-update-6sklf"] Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.992319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4sx\" (UniqueName: \"kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx\") pod \"barbican-db-create-4rgbz\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:49 crc kubenswrapper[4724]: I0223 17:48:49.999573 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.005916 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.023586 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"dea0933926d7438ecdb6e707e1d61c7f115e7b6e291387c500679c71de746659"} Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.023672 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3946025b-c492-4f1b-a3c3-62d2fa658586","Type":"ContainerStarted","Data":"8719936b562cd4da85d01a3635e91d93ccac27744ebbcad71f2b3e37e0bc8f12"} Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.039565 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w2gbv"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.052455 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6z5b\" (UniqueName: \"kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.052597 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znhnl\" (UniqueName: \"kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.052681 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 
17:48:50.052798 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.056774 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.070842 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.105295 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znhnl\" (UniqueName: \"kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl\") pod \"barbican-c185-account-create-update-2zg86\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.119064 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e738-account-create-update-6sklf"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.179843 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.181885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvdp\" (UniqueName: \"kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.181959 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.182097 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6z5b\" (UniqueName: \"kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.205358 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " 
pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.227223 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.289030 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvdp\" (UniqueName: \"kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.289096 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.296235 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.301208 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6z5b\" (UniqueName: \"kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b\") pod \"cinder-db-create-w2gbv\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.335032 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-chd5w"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.336564 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.354662 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chd5w"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.389455 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zt6pn"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.390549 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.391159 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn755\" (UniqueName: \"kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.391308 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.398698 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.398910 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8cc4s" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.399068 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.399416 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.400557 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zt6pn"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.417489 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-69qxx"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.418859 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.431993 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.006435387 podStartE2EDuration="46.43196476s" podCreationTimestamp="2026-02-23 17:48:04 +0000 UTC" firstStartedPulling="2026-02-23 17:48:43.419342304 +0000 UTC m=+1079.235541914" lastFinishedPulling="2026-02-23 17:48:47.844871687 +0000 UTC m=+1083.661071287" observedRunningTime="2026-02-23 17:48:50.334718064 +0000 UTC m=+1086.150917664" watchObservedRunningTime="2026-02-23 17:48:50.43196476 +0000 UTC m=+1086.248164360" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.438851 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvdp\" (UniqueName: \"kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp\") pod \"cinder-e738-account-create-update-6sklf\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.439127 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-5brtw" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.439311 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.439418 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.463491 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-69qxx"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492316 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvf5\" (UniqueName: \"kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492369 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492415 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97824\" (UniqueName: \"kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492474 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492544 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn755\" (UniqueName: \"kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " 
pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.492649 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.493410 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.493850 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-959e-account-create-update-kfz4c"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.504611 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-959e-account-create-update-kfz4c"] Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.504747 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.508183 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.577259 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn755\" (UniqueName: \"kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755\") pod \"neutron-db-create-chd5w\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.599179 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600578 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600651 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwvf5\" (UniqueName: \"kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600684 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97824\" (UniqueName: \"kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600829 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95bf8\" (UniqueName: \"kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.600869 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data\") pod \"keystone-db-sync-zt6pn\" (UID: 
\"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.613220 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.621171 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.629252 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.631339 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.636607 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.673258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwvf5\" (UniqueName: \"kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5\") pod \"keystone-db-sync-zt6pn\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.687766 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97824\" (UniqueName: \"kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824\") pod \"watcher-db-sync-69qxx\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.699806 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.707618 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.707684 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95bf8\" (UniqueName: \"kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.708825 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.747975 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95bf8\" (UniqueName: \"kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8\") pod \"neutron-959e-account-create-update-kfz4c\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.775694 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.791790 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-69qxx" Feb 23 17:48:50 crc kubenswrapper[4724]: I0223 17:48:50.845339 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.057357 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.058978 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.058997 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4rgbz"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.059072 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.066033 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118242 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118351 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118374 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.118520 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmh9p\" (UniqueName: \"kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220441 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220509 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220550 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220595 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.220831 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmh9p\" (UniqueName: \"kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.222021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.222412 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.222585 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.223133 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.223161 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.252494 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmh9p\" (UniqueName: 
\"kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p\") pod \"dnsmasq-dns-5c4949dfdc-glzsk\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: W0223 17:48:51.478309 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3811756_e4b8_40fe_9158_30f432841b07.slice/crio-243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f WatchSource:0}: Error finding container 243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f: Status 404 returned error can't find the container with id 243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f Feb 23 17:48:51 crc kubenswrapper[4724]: W0223 17:48:51.479894 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68bbcfd5_a073_443b_afab_650c48febc56.slice/crio-313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd WatchSource:0}: Error finding container 313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd: Status 404 returned error can't find the container with id 313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.497151 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c185-account-create-update-2zg86"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.524049 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e738-account-create-update-6sklf"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.537693 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.596856 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chd5w"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.798896 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w2gbv"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.838495 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-959e-account-create-update-kfz4c"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.981088 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zt6pn"] Feb 23 17:48:51 crc kubenswrapper[4724]: I0223 17:48:51.988819 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-69qxx"] Feb 23 17:48:52 crc kubenswrapper[4724]: W0223 17:48:52.008355 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20956a35_60c2_4df4_b475_0a64a3fa11ae.slice/crio-b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9 WatchSource:0}: Error finding container b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9: Status 404 returned error can't find the container with id b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9 Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.054690 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w2gbv" event={"ID":"8615b39e-8bab-4706-a6b6-e719c566b7dc","Type":"ContainerStarted","Data":"5f64ed7923df3f00fbe2ac8595c6530f536c61bde75a6946e105316af554b38b"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 
17:48:52.060024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959e-account-create-update-kfz4c" event={"ID":"934da594-6ca7-46b8-954e-c1cff91e3f44","Type":"ContainerStarted","Data":"18a6c29a1dd48d5226484608ed17571018149f364bff6ecfbd52970bf77c1f02"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.062087 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c185-account-create-update-2zg86" event={"ID":"a3811756-e4b8-40fe-9158-30f432841b07","Type":"ContainerStarted","Data":"10d7d78fa3fd71b4661921677c22121cd774b794b279c6d1b98cbcd8e2abd565"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.062114 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c185-account-create-update-2zg86" event={"ID":"a3811756-e4b8-40fe-9158-30f432841b07","Type":"ContainerStarted","Data":"243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.063966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e738-account-create-update-6sklf" event={"ID":"68bbcfd5-a073-443b-afab-650c48febc56","Type":"ContainerStarted","Data":"575e86fe8fcea57123a642069fbd47eb6f5ab040c6ba5558f2c99288053d2460"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.064353 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e738-account-create-update-6sklf" event={"ID":"68bbcfd5-a073-443b-afab-650c48febc56","Type":"ContainerStarted","Data":"313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.065916 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zt6pn" event={"ID":"20956a35-60c2-4df4-b475-0a64a3fa11ae","Type":"ContainerStarted","Data":"b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.066967 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-69qxx" event={"ID":"9686c843-cd47-4a6c-992a-97dd99d4304e","Type":"ContainerStarted","Data":"94583b07f3078d56cb1fc59d7580d9e8380458bda5d8e58b4365052af74935e7"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.072582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rgbz" event={"ID":"1c3f7706-ecc6-45ce-90d7-89bafc6588fd","Type":"ContainerStarted","Data":"8be48d7a1eb0ad41d208dc3291a360e63288a04c10748e48c7853ba7e3656644"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.072633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rgbz" event={"ID":"1c3f7706-ecc6-45ce-90d7-89bafc6588fd","Type":"ContainerStarted","Data":"a1b6ff475952f760c2496fcff6dc329abdd8c49f66a998cb8dcd89ee598763aa"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.094250 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chd5w" event={"ID":"2e8659d3-de95-405b-b137-54708400f566","Type":"ContainerStarted","Data":"c52634768d11e5831a1c24915ed25d659594d04b13b74863bee9b508a9921985"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.094303 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chd5w" event={"ID":"2e8659d3-de95-405b-b137-54708400f566","Type":"ContainerStarted","Data":"c5cb9f8f4884cbf27985733bf7ed4de92606cd39945f40a1669828580ea0c929"} Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.094606 4724 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-c185-account-create-update-2zg86" podStartSLOduration=3.094585661 podStartE2EDuration="3.094585661s" podCreationTimestamp="2026-02-23 17:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:52.080142376 +0000 UTC m=+1087.896341966" watchObservedRunningTime="2026-02-23 17:48:52.094585661 +0000 UTC m=+1087.910785251" Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.118299 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e738-account-create-update-6sklf" podStartSLOduration=3.118282309 podStartE2EDuration="3.118282309s" podCreationTimestamp="2026-02-23 17:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:52.096915759 +0000 UTC m=+1087.913115359" watchObservedRunningTime="2026-02-23 17:48:52.118282309 +0000 UTC m=+1087.934481899" Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.130910 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4rgbz" podStartSLOduration=3.130895617 podStartE2EDuration="3.130895617s" podCreationTimestamp="2026-02-23 17:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:52.125410299 +0000 UTC m=+1087.941609899" watchObservedRunningTime="2026-02-23 17:48:52.130895617 +0000 UTC m=+1087.947095217" Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.152267 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-chd5w" podStartSLOduration=2.152249067 podStartE2EDuration="2.152249067s" podCreationTimestamp="2026-02-23 17:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:52.143556197 +0000 UTC m=+1087.959755807" watchObservedRunningTime="2026-02-23 17:48:52.152249067 +0000 UTC m=+1087.968448667" Feb 23 17:48:52 crc kubenswrapper[4724]: I0223 17:48:52.234204 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.108177 4724 generic.go:334] "Generic (PLEG): container finished" podID="934da594-6ca7-46b8-954e-c1cff91e3f44" containerID="75fa4f6c6f11759f32ec59979bd45a4d7a27600b287363fc603da871ee5f39bd" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.108530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959e-account-create-update-kfz4c" event={"ID":"934da594-6ca7-46b8-954e-c1cff91e3f44","Type":"ContainerDied","Data":"75fa4f6c6f11759f32ec59979bd45a4d7a27600b287363fc603da871ee5f39bd"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.112157 4724 generic.go:334] "Generic (PLEG): container finished" podID="a3811756-e4b8-40fe-9158-30f432841b07" containerID="10d7d78fa3fd71b4661921677c22121cd774b794b279c6d1b98cbcd8e2abd565" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.112215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c185-account-create-update-2zg86" event={"ID":"a3811756-e4b8-40fe-9158-30f432841b07","Type":"ContainerDied","Data":"10d7d78fa3fd71b4661921677c22121cd774b794b279c6d1b98cbcd8e2abd565"} Feb 23 17:48:53 crc 
kubenswrapper[4724]: I0223 17:48:53.116213 4724 generic.go:334] "Generic (PLEG): container finished" podID="68bbcfd5-a073-443b-afab-650c48febc56" containerID="575e86fe8fcea57123a642069fbd47eb6f5ab040c6ba5558f2c99288053d2460" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.116266 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e738-account-create-update-6sklf" event={"ID":"68bbcfd5-a073-443b-afab-650c48febc56","Type":"ContainerDied","Data":"575e86fe8fcea57123a642069fbd47eb6f5ab040c6ba5558f2c99288053d2460"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.119161 4724 generic.go:334] "Generic (PLEG): container finished" podID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerID="78101d8fa9f487ff449ed71d66b10b2d053d213d358cf8736056e3004cdbcd59" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.119230 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" event={"ID":"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4","Type":"ContainerDied","Data":"78101d8fa9f487ff449ed71d66b10b2d053d213d358cf8736056e3004cdbcd59"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.119261 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" event={"ID":"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4","Type":"ContainerStarted","Data":"e965a91ca09412a1ac66e18b3fb15d5fede78961149007444e9ee2c8da031e86"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.126892 4724 generic.go:334] "Generic (PLEG): container finished" podID="1c3f7706-ecc6-45ce-90d7-89bafc6588fd" containerID="8be48d7a1eb0ad41d208dc3291a360e63288a04c10748e48c7853ba7e3656644" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.127000 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rgbz" event={"ID":"1c3f7706-ecc6-45ce-90d7-89bafc6588fd","Type":"ContainerDied","Data":"8be48d7a1eb0ad41d208dc3291a360e63288a04c10748e48c7853ba7e3656644"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.129351 4724 generic.go:334] "Generic (PLEG): container finished" podID="2e8659d3-de95-405b-b137-54708400f566" containerID="c52634768d11e5831a1c24915ed25d659594d04b13b74863bee9b508a9921985" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.129453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chd5w" event={"ID":"2e8659d3-de95-405b-b137-54708400f566","Type":"ContainerDied","Data":"c52634768d11e5831a1c24915ed25d659594d04b13b74863bee9b508a9921985"} Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.131560 4724 generic.go:334] "Generic (PLEG): container finished" podID="8615b39e-8bab-4706-a6b6-e719c566b7dc" containerID="fd8dcc156220d851af55cbcd9261a4f3b0d8fa6dce9b958f316266b34bfcd863" exitCode=0 Feb 23 17:48:53 crc kubenswrapper[4724]: I0223 17:48:53.131583 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w2gbv" event={"ID":"8615b39e-8bab-4706-a6b6-e719c566b7dc","Type":"ContainerDied","Data":"fd8dcc156220d851af55cbcd9261a4f3b0d8fa6dce9b958f316266b34bfcd863"} Feb 23 17:48:54 crc kubenswrapper[4724]: I0223 17:48:54.142248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" event={"ID":"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4","Type":"ContainerStarted","Data":"ddf58b4e407180f5020e980b08117ce07de994e115c26f57d05d0cbeb961bf5e"} Feb 23 17:48:54 crc kubenswrapper[4724]: I0223 17:48:54.142619 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:48:54 crc kubenswrapper[4724]: I0223 17:48:54.148354 4724 generic.go:334] "Generic (PLEG): container finished" podID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerID="aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f" exitCode=0 Feb 23 17:48:54 crc kubenswrapper[4724]: I0223 17:48:54.148542 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerDied","Data":"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"} Feb 23 17:48:54 crc kubenswrapper[4724]: I0223 17:48:54.174681 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podStartSLOduration=4.174664771 podStartE2EDuration="4.174664771s" podCreationTimestamp="2026-02-23 17:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:48:54.169133412 +0000 UTC m=+1089.985333032" watchObservedRunningTime="2026-02-23 17:48:54.174664771 +0000 UTC m=+1089.990864371" Feb 23 17:48:55 crc kubenswrapper[4724]: I0223 17:48:55.158473 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerStarted","Data":"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"} Feb 23 17:48:55 crc kubenswrapper[4724]: I0223 17:48:55.160523 4724 generic.go:334] "Generic (PLEG): container finished" podID="4835f23c-1737-45fa-8d8f-d5a381c9d498" containerID="ec242011e8cfa23eb71faae51f72794f9bc6b983c3dae0e34c6fd63b2963e704" exitCode=0 Feb 23 17:48:55 crc kubenswrapper[4724]: I0223 17:48:55.160608 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qqzwt" event={"ID":"4835f23c-1737-45fa-8d8f-d5a381c9d498","Type":"ContainerDied","Data":"ec242011e8cfa23eb71faae51f72794f9bc6b983c3dae0e34c6fd63b2963e704"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.080378 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.088424 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.100246 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.110473 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.132088 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.135760 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.226156 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-959e-account-create-update-kfz4c" event={"ID":"934da594-6ca7-46b8-954e-c1cff91e3f44","Type":"ContainerDied","Data":"18a6c29a1dd48d5226484608ed17571018149f364bff6ecfbd52970bf77c1f02"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.226204 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a6c29a1dd48d5226484608ed17571018149f364bff6ecfbd52970bf77c1f02" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.226253 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-959e-account-create-update-kfz4c" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.243570 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c185-account-create-update-2zg86" event={"ID":"a3811756-e4b8-40fe-9158-30f432841b07","Type":"ContainerDied","Data":"243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.243610 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243254128f16c6feae24d58b39df565de1e5a932460b76810c138aef62c3bc6f" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.243676 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c185-account-create-update-2zg86" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znhnl\" (UniqueName: \"kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl\") pod \"a3811756-e4b8-40fe-9158-30f432841b07\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244342 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmvdp\" (UniqueName: \"kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp\") pod \"68bbcfd5-a073-443b-afab-650c48febc56\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244371 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts\") pod \"2e8659d3-de95-405b-b137-54708400f566\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244425 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts\") pod \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244469 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts\") pod \"68bbcfd5-a073-443b-afab-650c48febc56\" (UID: \"68bbcfd5-a073-443b-afab-650c48febc56\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244489 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts\") pod \"934da594-6ca7-46b8-954e-c1cff91e3f44\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244511 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb4sx\" (UniqueName: \"kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx\") pod \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\" (UID: \"1c3f7706-ecc6-45ce-90d7-89bafc6588fd\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244571 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95bf8\" (UniqueName: \"kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8\") pod \"934da594-6ca7-46b8-954e-c1cff91e3f44\" (UID: \"934da594-6ca7-46b8-954e-c1cff91e3f44\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244588 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6z5b\" (UniqueName: \"kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b\") pod \"8615b39e-8bab-4706-a6b6-e719c566b7dc\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244637 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn755\" (UniqueName: \"kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755\") pod \"2e8659d3-de95-405b-b137-54708400f566\" (UID: \"2e8659d3-de95-405b-b137-54708400f566\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244663 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts\") pod \"8615b39e-8bab-4706-a6b6-e719c566b7dc\" (UID: \"8615b39e-8bab-4706-a6b6-e719c566b7dc\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.244681 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts\") pod \"a3811756-e4b8-40fe-9158-30f432841b07\" (UID: \"a3811756-e4b8-40fe-9158-30f432841b07\") " Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.247604 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3811756-e4b8-40fe-9158-30f432841b07" (UID: "a3811756-e4b8-40fe-9158-30f432841b07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.248889 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "934da594-6ca7-46b8-954e-c1cff91e3f44" (UID: "934da594-6ca7-46b8-954e-c1cff91e3f44"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.249033 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e8659d3-de95-405b-b137-54708400f566" (UID: "2e8659d3-de95-405b-b137-54708400f566"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.249287 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c3f7706-ecc6-45ce-90d7-89bafc6588fd" (UID: "1c3f7706-ecc6-45ce-90d7-89bafc6588fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.249638 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68bbcfd5-a073-443b-afab-650c48febc56" (UID: "68bbcfd5-a073-443b-afab-650c48febc56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.257817 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8615b39e-8bab-4706-a6b6-e719c566b7dc" (UID: "8615b39e-8bab-4706-a6b6-e719c566b7dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.259605 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755" (OuterVolumeSpecName: "kube-api-access-qn755") pod "2e8659d3-de95-405b-b137-54708400f566" (UID: "2e8659d3-de95-405b-b137-54708400f566"). InnerVolumeSpecName "kube-api-access-qn755". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.261475 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b" (OuterVolumeSpecName: "kube-api-access-j6z5b") pod "8615b39e-8bab-4706-a6b6-e719c566b7dc" (UID: "8615b39e-8bab-4706-a6b6-e719c566b7dc"). InnerVolumeSpecName "kube-api-access-j6z5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.261635 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx" (OuterVolumeSpecName: "kube-api-access-xb4sx") pod "1c3f7706-ecc6-45ce-90d7-89bafc6588fd" (UID: "1c3f7706-ecc6-45ce-90d7-89bafc6588fd"). InnerVolumeSpecName "kube-api-access-xb4sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.262108 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8" (OuterVolumeSpecName: "kube-api-access-95bf8") pod "934da594-6ca7-46b8-954e-c1cff91e3f44" (UID: "934da594-6ca7-46b8-954e-c1cff91e3f44"). InnerVolumeSpecName "kube-api-access-95bf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.262365 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e738-account-create-update-6sklf" event={"ID":"68bbcfd5-a073-443b-afab-650c48febc56","Type":"ContainerDied","Data":"313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.262411 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313f7b1f4147b1adb6bb8c669dc104457eda971836afeb656da2d02d4d8408fd" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.262471 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e738-account-create-update-6sklf" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.264006 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp" (OuterVolumeSpecName: "kube-api-access-pmvdp") pod "68bbcfd5-a073-443b-afab-650c48febc56" (UID: "68bbcfd5-a073-443b-afab-650c48febc56"). InnerVolumeSpecName "kube-api-access-pmvdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.269664 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl" (OuterVolumeSpecName: "kube-api-access-znhnl") pod "a3811756-e4b8-40fe-9158-30f432841b07" (UID: "a3811756-e4b8-40fe-9158-30f432841b07"). InnerVolumeSpecName "kube-api-access-znhnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.301171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rgbz" event={"ID":"1c3f7706-ecc6-45ce-90d7-89bafc6588fd","Type":"ContainerDied","Data":"a1b6ff475952f760c2496fcff6dc329abdd8c49f66a998cb8dcd89ee598763aa"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.301214 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b6ff475952f760c2496fcff6dc329abdd8c49f66a998cb8dcd89ee598763aa" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.301282 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rgbz" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.312758 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chd5w" event={"ID":"2e8659d3-de95-405b-b137-54708400f566","Type":"ContainerDied","Data":"c5cb9f8f4884cbf27985733bf7ed4de92606cd39945f40a1669828580ea0c929"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.312796 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cb9f8f4884cbf27985733bf7ed4de92606cd39945f40a1669828580ea0c929" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.312847 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-chd5w" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.321485 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w2gbv" event={"ID":"8615b39e-8bab-4706-a6b6-e719c566b7dc","Type":"ContainerDied","Data":"5f64ed7923df3f00fbe2ac8595c6530f536c61bde75a6946e105316af554b38b"} Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.321598 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f64ed7923df3f00fbe2ac8595c6530f536c61bde75a6946e105316af554b38b" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.321651 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w2gbv" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346071 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn755\" (UniqueName: \"kubernetes.io/projected/2e8659d3-de95-405b-b137-54708400f566-kube-api-access-qn755\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346114 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8615b39e-8bab-4706-a6b6-e719c566b7dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346123 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3811756-e4b8-40fe-9158-30f432841b07-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346137 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znhnl\" (UniqueName: \"kubernetes.io/projected/a3811756-e4b8-40fe-9158-30f432841b07-kube-api-access-znhnl\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346145 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmvdp\" (UniqueName: \"kubernetes.io/projected/68bbcfd5-a073-443b-afab-650c48febc56-kube-api-access-pmvdp\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346154 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8659d3-de95-405b-b137-54708400f566-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346163 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346171 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68bbcfd5-a073-443b-afab-650c48febc56-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346179 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/934da594-6ca7-46b8-954e-c1cff91e3f44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346188 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb4sx\" (UniqueName: \"kubernetes.io/projected/1c3f7706-ecc6-45ce-90d7-89bafc6588fd-kube-api-access-xb4sx\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc 
kubenswrapper[4724]: I0223 17:48:57.346196 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95bf8\" (UniqueName: \"kubernetes.io/projected/934da594-6ca7-46b8-954e-c1cff91e3f44-kube-api-access-95bf8\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.346205 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6z5b\" (UniqueName: \"kubernetes.io/projected/8615b39e-8bab-4706-a6b6-e719c566b7dc-kube-api-access-j6z5b\") on node \"crc\" DevicePath \"\"" Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.752317 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:48:57 crc kubenswrapper[4724]: I0223 17:48:57.752379 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.818609 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qqzwt" Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.906538 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9476\" (UniqueName: \"kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476\") pod \"4835f23c-1737-45fa-8d8f-d5a381c9d498\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.906665 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data\") pod \"4835f23c-1737-45fa-8d8f-d5a381c9d498\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.906710 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data\") pod \"4835f23c-1737-45fa-8d8f-d5a381c9d498\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.906755 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle\") pod \"4835f23c-1737-45fa-8d8f-d5a381c9d498\" (UID: \"4835f23c-1737-45fa-8d8f-d5a381c9d498\") " Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.912290 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476" (OuterVolumeSpecName: "kube-api-access-p9476") pod "4835f23c-1737-45fa-8d8f-d5a381c9d498" (UID: "4835f23c-1737-45fa-8d8f-d5a381c9d498"). InnerVolumeSpecName "kube-api-access-p9476". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.912955 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4835f23c-1737-45fa-8d8f-d5a381c9d498" (UID: "4835f23c-1737-45fa-8d8f-d5a381c9d498"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.948311 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4835f23c-1737-45fa-8d8f-d5a381c9d498" (UID: "4835f23c-1737-45fa-8d8f-d5a381c9d498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:00 crc kubenswrapper[4724]: I0223 17:49:00.977560 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data" (OuterVolumeSpecName: "config-data") pod "4835f23c-1737-45fa-8d8f-d5a381c9d498" (UID: "4835f23c-1737-45fa-8d8f-d5a381c9d498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.009045 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.009084 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9476\" (UniqueName: \"kubernetes.io/projected/4835f23c-1737-45fa-8d8f-d5a381c9d498-kube-api-access-p9476\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.009098 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.009107 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4835f23c-1737-45fa-8d8f-d5a381c9d498-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.356665 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qqzwt" event={"ID":"4835f23c-1737-45fa-8d8f-d5a381c9d498","Type":"ContainerDied","Data":"a6916bfb1b1ec948e2212bc3352e874aa4f20c8d3c5a868458f7259fbb81bc2c"} Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.356704 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6916bfb1b1ec948e2212bc3352e874aa4f20c8d3c5a868458f7259fbb81bc2c" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.356754 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qqzwt" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.540876 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.630814 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:49:01 crc kubenswrapper[4724]: I0223 17:49:01.631019 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="dnsmasq-dns" containerID="cri-o://d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc" gracePeriod=10 Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.164176 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271342 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271769 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="init" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271798 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="init" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271836 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="dnsmasq-dns" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271843 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="dnsmasq-dns" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271860 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3811756-e4b8-40fe-9158-30f432841b07" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271866 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3811756-e4b8-40fe-9158-30f432841b07" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271877 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e8659d3-de95-405b-b137-54708400f566" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271884 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8659d3-de95-405b-b137-54708400f566" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271891 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934da594-6ca7-46b8-954e-c1cff91e3f44" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271898 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="934da594-6ca7-46b8-954e-c1cff91e3f44" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271908 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4835f23c-1737-45fa-8d8f-d5a381c9d498" containerName="glance-db-sync" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271914 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4835f23c-1737-45fa-8d8f-d5a381c9d498" containerName="glance-db-sync" Feb 23 17:49:02 
crc kubenswrapper[4724]: E0223 17:49:02.271932 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3f7706-ecc6-45ce-90d7-89bafc6588fd" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271938 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3f7706-ecc6-45ce-90d7-89bafc6588fd" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271949 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bbcfd5-a073-443b-afab-650c48febc56" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271955 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bbcfd5-a073-443b-afab-650c48febc56" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.271964 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8615b39e-8bab-4706-a6b6-e719c566b7dc" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.271970 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8615b39e-8bab-4706-a6b6-e719c566b7dc" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272114 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bbcfd5-a073-443b-afab-650c48febc56" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272126 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4835f23c-1737-45fa-8d8f-d5a381c9d498" containerName="glance-db-sync" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272136 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3f7706-ecc6-45ce-90d7-89bafc6588fd" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272144 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="934da594-6ca7-46b8-954e-c1cff91e3f44" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272153 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8659d3-de95-405b-b137-54708400f566" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272161 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3811756-e4b8-40fe-9158-30f432841b07" containerName="mariadb-account-create-update" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272175 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerName="dnsmasq-dns" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.272185 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8615b39e-8bab-4706-a6b6-e719c566b7dc" containerName="mariadb-database-create" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.273313 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.286761 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.360445 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config\") pod \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.360558 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb\") pod \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.360597 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb\") pod \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.360633 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc\") pod \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.360725 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvtk\" (UniqueName: \"kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk\") pod \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\" (UID: \"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135\") " Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.366832 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk" (OuterVolumeSpecName: "kube-api-access-svvtk") pod "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" (UID: "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135"). InnerVolumeSpecName "kube-api-access-svvtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.373195 4724 generic.go:334] "Generic (PLEG): container finished" podID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" containerID="d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc" exitCode=0 Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.373253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" event={"ID":"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135","Type":"ContainerDied","Data":"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc"} Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.373287 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" event={"ID":"5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135","Type":"ContainerDied","Data":"9babf06dd96ba6620253113c09963dd4cbc014eca9a86bf510d93b466dffbcdd"} Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.373319 4724 scope.go:117] "RemoveContainer" containerID="d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.373409 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b99fb9575-gk5sx" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.416969 4724 scope.go:117] "RemoveContainer" containerID="a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.422689 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config" (OuterVolumeSpecName: "config") pod "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" (UID: "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.424861 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" (UID: "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.445726 4724 scope.go:117] "RemoveContainer" containerID="d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.447160 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" (UID: "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.447260 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" (UID: "5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.447592 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc\": container with ID starting with d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc not found: ID does not exist" containerID="d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.447637 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc"} err="failed to get container status \"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc\": rpc error: code = NotFound desc = could not find container \"d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc\": container with ID starting with d4782fd636c791d232e988b5d1ee9eeeba90336afea3454681705b65978e0acc not found: ID does not exist" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.447664 4724 scope.go:117] "RemoveContainer" containerID="a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604" Feb 23 17:49:02 crc kubenswrapper[4724]: E0223 17:49:02.448121 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604\": container with ID starting with a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604 not found: ID does not exist" containerID="a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.448161 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604"} err="failed to get container status \"a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604\": rpc error: code = NotFound desc = could not find container \"a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604\": container with ID starting with a3c959eeb3a055d0a236e72b680374425ea1b968ec0219630ea3e198fef4a604 not found: ID does not exist" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.462842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.462896 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.462932 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" 
Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.462990 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463024 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463046 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvpz\" (UniqueName: \"kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463181 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvtk\" (UniqueName: \"kubernetes.io/projected/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-kube-api-access-svvtk\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463215 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463227 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463239 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.463249 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.564503 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.564867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.564902 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvpz\" (UniqueName: 
\"kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.564975 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.565009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.565053 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.566077 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.566637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.567072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.568206 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.568683 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.589274 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvpz\" (UniqueName: \"kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz\") pod \"dnsmasq-dns-656d4464cc-fm4h2\" 
(UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.599795 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.726856 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.734539 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b99fb9575-gk5sx"] Feb 23 17:49:02 crc kubenswrapper[4724]: I0223 17:49:02.962478 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135" path="/var/lib/kubelet/pods/5a197bd2-d9ec-44a2-bbcf-f5f3c8d05135/volumes" Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.102991 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.392063 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zt6pn" event={"ID":"20956a35-60c2-4df4-b475-0a64a3fa11ae","Type":"ContainerStarted","Data":"9c6a1bae99ea621ca5f410f1dd510a271fd827f91fe0b0a36fba0a5600e407a2"} Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.423336 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zt6pn" podStartSLOduration=3.938885659 podStartE2EDuration="13.423316166s" podCreationTimestamp="2026-02-23 17:48:50 +0000 UTC" firstStartedPulling="2026-02-23 17:48:52.015281708 +0000 UTC m=+1087.831481308" lastFinishedPulling="2026-02-23 17:49:01.499712215 +0000 UTC m=+1097.315911815" observedRunningTime="2026-02-23 17:49:03.416530684 +0000 UTC m=+1099.232730284" watchObservedRunningTime="2026-02-23 17:49:03.423316166 +0000 UTC m=+1099.239515766" Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.439106 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-69qxx" event={"ID":"9686c843-cd47-4a6c-992a-97dd99d4304e","Type":"ContainerStarted","Data":"5a59058eb1fc336cf42338c957b13971843c3de509c90f6fdb13015a7658d4e0"} Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.446784 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" event={"ID":"2f344b52-c041-4a0b-bfb7-4c3ff396301a","Type":"ContainerStarted","Data":"af6080b38576223629256e5535da76414f44b50f0c8351a0001f4a6fa66d7dcf"} Feb 23 17:49:03 crc kubenswrapper[4724]: I0223 17:49:03.471550 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-69qxx" podStartSLOduration=3.605209164 podStartE2EDuration="13.471524883s" podCreationTimestamp="2026-02-23 17:48:50 +0000 UTC" firstStartedPulling="2026-02-23 17:48:52.006558888 +0000 UTC m=+1087.822758488" lastFinishedPulling="2026-02-23 17:49:01.872874607 +0000 UTC m=+1097.689074207" observedRunningTime="2026-02-23 17:49:03.461839358 +0000 UTC m=+1099.278038978" watchObservedRunningTime="2026-02-23 17:49:03.471524883 +0000 UTC m=+1099.287724493" Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 17:49:04.458962 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerStarted","Data":"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"} Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 
17:49:04.459498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerStarted","Data":"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"} Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 17:49:04.460631 4724 generic.go:334] "Generic (PLEG): container finished" podID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerID="5e3f338e3b65e631ec835d14019c5968a69c9c0566a88ab51ac0b8f7c67d984e" exitCode=0 Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 17:49:04.460724 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" event={"ID":"2f344b52-c041-4a0b-bfb7-4c3ff396301a","Type":"ContainerDied","Data":"5e3f338e3b65e631ec835d14019c5968a69c9c0566a88ab51ac0b8f7c67d984e"} Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 17:49:04.550842 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.550817104 podStartE2EDuration="21.550817104s" podCreationTimestamp="2026-02-23 17:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:04.543975151 +0000 UTC m=+1100.360174751" watchObservedRunningTime="2026-02-23 17:49:04.550817104 +0000 UTC m=+1100.367016704" Feb 23 17:49:04 crc kubenswrapper[4724]: I0223 17:49:04.552631 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 23 17:49:05 crc kubenswrapper[4724]: I0223 17:49:05.471449 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" event={"ID":"2f344b52-c041-4a0b-bfb7-4c3ff396301a","Type":"ContainerStarted","Data":"ea61a724d4d08ca2a4b5e18fec30a94ab41912d400aed8223c92b4bfbb78d16d"} Feb 23 17:49:05 crc kubenswrapper[4724]: I0223 17:49:05.472388 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:05 crc kubenswrapper[4724]: I0223 17:49:05.497863 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" podStartSLOduration=3.497846956 podStartE2EDuration="3.497846956s" podCreationTimestamp="2026-02-23 17:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:05.492602164 +0000 UTC m=+1101.308801774" watchObservedRunningTime="2026-02-23 17:49:05.497846956 +0000 UTC m=+1101.314046556" Feb 23 17:49:07 crc kubenswrapper[4724]: I0223 17:49:07.494504 4724 generic.go:334] "Generic (PLEG): container finished" podID="9686c843-cd47-4a6c-992a-97dd99d4304e" containerID="5a59058eb1fc336cf42338c957b13971843c3de509c90f6fdb13015a7658d4e0" exitCode=0 Feb 23 17:49:07 crc kubenswrapper[4724]: I0223 17:49:07.494575 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-69qxx" event={"ID":"9686c843-cd47-4a6c-992a-97dd99d4304e","Type":"ContainerDied","Data":"5a59058eb1fc336cf42338c957b13971843c3de509c90f6fdb13015a7658d4e0"} Feb 23 17:49:08 crc kubenswrapper[4724]: I0223 17:49:08.502688 4724 generic.go:334] "Generic (PLEG): container finished" podID="20956a35-60c2-4df4-b475-0a64a3fa11ae" containerID="9c6a1bae99ea621ca5f410f1dd510a271fd827f91fe0b0a36fba0a5600e407a2" exitCode=0
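
The "Observed pod startup duration" entries above expose the tracker's accounting: podStartSLOduration is podStartE2EDuration minus the image-pull window, and pods whose images were already present carry the zero-time sentinel "0001-01-01 00:00:00 +0000 UTC" for both pull timestamps, so their SLO and E2E durations match (prometheus-metric-storage-0: 21.550817104 for both; dnsmasq-dns-656d4464cc-fm4h2: 3.497846956). For openstack/keystone-db-sync-zt6pn the monotonic m=+ offsets make the subtraction checkable; a short Go verification with the constants copied from the log:

    package main

    import "fmt"

    // Re-derives the keystone-db-sync-zt6pn figures printed by
    // pod_startup_latency_tracker.go:104 earlier in this journal.
    func main() {
        const (
            firstStartedPulling = 1087.831481308 // m=+ offset, seconds
            lastFinishedPulling = 1097.315911815 // m=+ offset, seconds
            e2e                 = 13.423316166   // podStartE2EDuration, seconds
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull:   %.9fs\n", pull)     // expected: 9.484430507s
        fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // expected: 3.938885659s = podStartSLOduration
    }
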
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zt6pn" event={"ID":"20956a35-60c2-4df4-b475-0a64a3fa11ae","Type":"ContainerDied","Data":"9c6a1bae99ea621ca5f410f1dd510a271fd827f91fe0b0a36fba0a5600e407a2"} Feb 23 17:49:08 crc kubenswrapper[4724]: I0223 17:49:08.983385 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-69qxx" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.115631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97824\" (UniqueName: \"kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824\") pod \"9686c843-cd47-4a6c-992a-97dd99d4304e\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.115804 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle\") pod \"9686c843-cd47-4a6c-992a-97dd99d4304e\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.115852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data\") pod \"9686c843-cd47-4a6c-992a-97dd99d4304e\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.115902 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data\") pod \"9686c843-cd47-4a6c-992a-97dd99d4304e\" (UID: \"9686c843-cd47-4a6c-992a-97dd99d4304e\") " Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.122573 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9686c843-cd47-4a6c-992a-97dd99d4304e" (UID: "9686c843-cd47-4a6c-992a-97dd99d4304e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.122712 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824" (OuterVolumeSpecName: "kube-api-access-97824") pod "9686c843-cd47-4a6c-992a-97dd99d4304e" (UID: "9686c843-cd47-4a6c-992a-97dd99d4304e"). InnerVolumeSpecName "kube-api-access-97824". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.142200 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9686c843-cd47-4a6c-992a-97dd99d4304e" (UID: "9686c843-cd47-4a6c-992a-97dd99d4304e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.167434 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data" (OuterVolumeSpecName: "config-data") pod "9686c843-cd47-4a6c-992a-97dd99d4304e" (UID: "9686c843-cd47-4a6c-992a-97dd99d4304e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.218085 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97824\" (UniqueName: \"kubernetes.io/projected/9686c843-cd47-4a6c-992a-97dd99d4304e-kube-api-access-97824\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.218124 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.218135 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.218145 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9686c843-cd47-4a6c-992a-97dd99d4304e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.511324 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-69qxx" Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.512485 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-69qxx" event={"ID":"9686c843-cd47-4a6c-992a-97dd99d4304e","Type":"ContainerDied","Data":"94583b07f3078d56cb1fc59d7580d9e8380458bda5d8e58b4365052af74935e7"} Feb 23 17:49:09 crc kubenswrapper[4724]: I0223 17:49:09.512512 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94583b07f3078d56cb1fc59d7580d9e8380458bda5d8e58b4365052af74935e7" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.821235 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.932280 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data\") pod \"20956a35-60c2-4df4-b475-0a64a3fa11ae\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.932407 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwvf5\" (UniqueName: \"kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5\") pod \"20956a35-60c2-4df4-b475-0a64a3fa11ae\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.932528 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle\") pod \"20956a35-60c2-4df4-b475-0a64a3fa11ae\" (UID: \"20956a35-60c2-4df4-b475-0a64a3fa11ae\") " Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.936226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5" (OuterVolumeSpecName: "kube-api-access-pwvf5") pod "20956a35-60c2-4df4-b475-0a64a3fa11ae" (UID: "20956a35-60c2-4df4-b475-0a64a3fa11ae"). InnerVolumeSpecName "kube-api-access-pwvf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.964418 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20956a35-60c2-4df4-b475-0a64a3fa11ae" (UID: "20956a35-60c2-4df4-b475-0a64a3fa11ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:09.982315 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data" (OuterVolumeSpecName: "config-data") pod "20956a35-60c2-4df4-b475-0a64a3fa11ae" (UID: "20956a35-60c2-4df4-b475-0a64a3fa11ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.034722 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.034759 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20956a35-60c2-4df4-b475-0a64a3fa11ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.034772 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwvf5\" (UniqueName: \"kubernetes.io/projected/20956a35-60c2-4df4-b475-0a64a3fa11ae-kube-api-access-pwvf5\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.522133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zt6pn" event={"ID":"20956a35-60c2-4df4-b475-0a64a3fa11ae","Type":"ContainerDied","Data":"b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9"} Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.522473 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ca5f4fff417a6e3dfa78ec8c1d9e3b20995c44220f302177c6b68fa9a0a9b9" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.522204 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zt6pn" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.684872 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.685100 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="dnsmasq-dns" containerID="cri-o://ea61a724d4d08ca2a4b5e18fec30a94ab41912d400aed8223c92b4bfbb78d16d" gracePeriod=10 Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.686681 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.715318 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h68sq"] Feb 23 17:49:10 crc kubenswrapper[4724]: E0223 17:49:10.715742 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20956a35-60c2-4df4-b475-0a64a3fa11ae" containerName="keystone-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.715762 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="20956a35-60c2-4df4-b475-0a64a3fa11ae" containerName="keystone-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: E0223 17:49:10.715782 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9686c843-cd47-4a6c-992a-97dd99d4304e" containerName="watcher-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.715788 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9686c843-cd47-4a6c-992a-97dd99d4304e" containerName="watcher-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.715965 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9686c843-cd47-4a6c-992a-97dd99d4304e" containerName="watcher-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.715985 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="20956a35-60c2-4df4-b475-0a64a3fa11ae" containerName="keystone-db-sync" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.716585 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.725112 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.725283 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8cc4s" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.725434 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.725631 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.735136 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h68sq"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.737290 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.778959 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.780563 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853505 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853596 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853642 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853695 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgv6\" (UniqueName: \"kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.853743 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.867249 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.898177 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.901509 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.911226 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-5brtw" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.913908 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.947678 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955486 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955561 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcwd\" (UniqueName: \"kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955670 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " 
pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955766 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955807 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955873 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgv6\" (UniqueName: \"kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.955901 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.962720 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.964123 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.978517 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.987121 
4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:10 crc kubenswrapper[4724]: I0223 17:49:10.996267 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.046886 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.048269 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.050989 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgv6\" (UniqueName: \"kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6\") pod \"keystone-bootstrap-h68sq\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.055696 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.060894 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.061264 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065534 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcwd\" (UniqueName: \"kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065587 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065591 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k7z97" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065611 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065642 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: 
\"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065733 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065779 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065813 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065841 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcb7c\" (UniqueName: \"kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.065858 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.066661 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.066692 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: 
\"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.067360 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.067884 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.068529 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.081664 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.094784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcwd\" (UniqueName: \"kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd\") pod \"dnsmasq-dns-649c5dcfb9-g96zt\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.103033 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.126009 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.127835 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.136185 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.137515 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.138931 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.143182 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.150919 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167421 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167482 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167515 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167532 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167566 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcb7c\" (UniqueName: \"kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167613 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167636 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p296\" (UniqueName: \"kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167682 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data\") pod 
\"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167700 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.167715 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.168157 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.195142 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.195568 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.196055 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.198840 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q2ssq"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.208630 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.224873 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.274934 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcb7c\" (UniqueName: \"kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c\") pod \"watcher-decision-engine-0\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.275464 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.275653 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zmpb8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.275759 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277129 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277151 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277166 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277180 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277207 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277221 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277262 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chng2\" (UniqueName: \"kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2\") pod \"watcher-applier-0\" (UID: 
\"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277278 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5tn\" (UniqueName: \"kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277334 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p296\" (UniqueName: \"kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277372 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277445 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277464 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277523 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc 
kubenswrapper[4724]: I0223 17:49:11.277566 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277580 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.277602 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrsg\" (UniqueName: \"kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.278948 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.279084 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.282559 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.306740 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.319812 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.321554 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q2ssq"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.344268 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p296\" (UniqueName: \"kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296\") pod \"horizon-559d5d679f-9vm7m\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.364057 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-stfh8"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.365161 4724 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.368891 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.370939 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.371609 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4w99c" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378889 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378943 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrsg\" (UniqueName: \"kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378967 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.378994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379008 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379029 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379044 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379082 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chng2\" (UniqueName: \"kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5tn\" (UniqueName: \"kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.379159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.380953 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.383916 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.384166 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.391156 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc 
kubenswrapper[4724]: I0223 17:49:11.393059 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.393176 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-stfh8"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.393194 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.393968 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.397412 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.401666 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.413098 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.418784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.443094 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chng2\" (UniqueName: \"kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2\") pod \"watcher-applier-0\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.449781 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.466343 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrsg\" (UniqueName: \"kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg\") pod \"placement-db-sync-q2ssq\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " 
pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.466427 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.467893 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.474351 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5tn\" (UniqueName: \"kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn\") pod \"watcher-api-0\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.481437 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.481607 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57hr\" (UniqueName: \"kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.481684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.496607 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.524628 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.526349 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.554337 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kbqzq"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.556237 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.556837 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.556904 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.563048 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.563254 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d6rts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.563841 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.582641 4724 generic.go:334] "Generic (PLEG): container finished" podID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerID="ea61a724d4d08ca2a4b5e18fec30a94ab41912d400aed8223c92b4bfbb78d16d" exitCode=0 Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.582685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" event={"ID":"2f344b52-c041-4a0b-bfb7-4c3ff396301a","Type":"ContainerDied","Data":"ea61a724d4d08ca2a4b5e18fec30a94ab41912d400aed8223c92b4bfbb78d16d"} Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583823 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d459z\" (UniqueName: \"kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583865 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583894 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583914 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583954 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.583994 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584044 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584072 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57hr\" (UniqueName: \"kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584136 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584175 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.584194 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gbq\" (UniqueName: \"kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.590958 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.594287 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.614075 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-k8sd8"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.615711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.618480 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s4fjm" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.621533 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.622210 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57hr\" (UniqueName: \"kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr\") pod \"neutron-db-sync-stfh8\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.624216 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.630929 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.665662 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbqzq"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.679760 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686220 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686274 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686554 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gbq\" (UniqueName: \"kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686700 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d459z\" (UniqueName: \"kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.687319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc 
kubenswrapper[4724]: I0223 17:49:11.687385 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.686740 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.687506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.687536 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.687681 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.687979 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688272 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpm9\" (UniqueName: \"kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688373 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 
17:49:11.688416 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjkq\" (UniqueName: \"kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688442 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688513 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.688600 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.689167 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.690103 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.690817 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.691003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.691107 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.695286 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.700080 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.707780 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.709254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gbq\" (UniqueName: \"kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq\") pod \"dnsmasq-dns-856b879ffc-m4wq9\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.712517 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.712794 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d459z\" (UniqueName: \"kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z\") pod \"horizon-747cd567fc-7lvv6\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.712855 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.716667 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.716842 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.716984 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.721190 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-k8sd8"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.728770 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5xnsd" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.737761 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.742035 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.753874 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: E0223 17:49:11.754210 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="dnsmasq-dns" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.754227 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="dnsmasq-dns" Feb 23 17:49:11 crc kubenswrapper[4724]: E0223 17:49:11.754248 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="init" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.754255 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="init" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.755690 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" containerName="dnsmasq-dns" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.756840 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.759069 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.759272 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.786411 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789225 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789291 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvpz\" (UniqueName: \"kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789349 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789375 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789495 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.789537 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config\") pod \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\" (UID: \"2f344b52-c041-4a0b-bfb7-4c3ff396301a\") " Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790038 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790066 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790084 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790115 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790159 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790178 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhp2\" (UniqueName: 
\"kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790209 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790250 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790275 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790315 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790336 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790376 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpm9\" (UniqueName: \"kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790427 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.790444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjkq\" (UniqueName: 
\"kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.791502 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.817061 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.817070 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz" (OuterVolumeSpecName: "kube-api-access-wjvpz") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "kube-api-access-wjvpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.817866 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.818882 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.819526 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.823167 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjkq\" (UniqueName: \"kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.823581 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.827451 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpm9\" (UniqueName: \"kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.827587 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data\") pod \"barbican-db-sync-k8sd8\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.830289 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.832478 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.839664 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.841836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data\") pod \"cinder-db-sync-kbqzq\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.847007 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.852511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.894908 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.894990 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895015 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895034 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895056 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895079 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4d6m\" (UniqueName: \"kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895105 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895131 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895152 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9c9\" (UniqueName: \"kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895181 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895255 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895286 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895308 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895333 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895362 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895509 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895506 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895537 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhp2\" (UniqueName: \"kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895568 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895588 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895633 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.895724 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvpz\" (UniqueName: \"kubernetes.io/projected/2f344b52-c041-4a0b-bfb7-4c3ff396301a-kube-api-access-wjvpz\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.896493 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.899985 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.918883 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.921177 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.921737 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.930581 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.935042 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config" (OuterVolumeSpecName: "config") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.944280 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.944780 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.945057 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.945083 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.953933 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.955021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdhp2\" (UniqueName: \"kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2\") pod \"glance-default-external-api-0\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.971948 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997004 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997038 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997059 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997100 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997128 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997195 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997213 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997228 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997259 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4d6m\" (UniqueName: \"kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997281 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997296 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9c9\" (UniqueName: \"kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997324 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997483 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997498 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:11 crc kubenswrapper[4724]: I0223 17:49:11.997510 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.001219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.018055 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc 
kubenswrapper[4724]: I0223 17:49:12.018363 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.019316 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.021157 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.021948 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.025843 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.026758 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.034236 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.034273 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.036325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.038191 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.039746 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.044665 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.051162 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4d6m\" (UniqueName: \"kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m\") pod \"ceilometer-0\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.061860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9c9\" (UniqueName: \"kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.112555 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.118459 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f344b52-c041-4a0b-bfb7-4c3ff396301a" (UID: "2f344b52-c041-4a0b-bfb7-4c3ff396301a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.126334 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.136728 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h68sq"] Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.176262 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.206211 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.206240 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f344b52-c041-4a0b-bfb7-4c3ff396301a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.364338 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.390852 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.629636 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" event={"ID":"2f344b52-c041-4a0b-bfb7-4c3ff396301a","Type":"ContainerDied","Data":"af6080b38576223629256e5535da76414f44b50f0c8351a0001f4a6fa66d7dcf"} Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.629704 4724 scope.go:117] "RemoveContainer" containerID="ea61a724d4d08ca2a4b5e18fec30a94ab41912d400aed8223c92b4bfbb78d16d" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.629725 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656d4464cc-fm4h2" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.657694 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" event={"ID":"8fa009a6-0898-4394-8392-16e4c47c8e9a","Type":"ContainerStarted","Data":"c770bebcd4fb30f4a684710b6f9df0c817f5dff8e58e99e922be9a1cdfec57e9"} Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.659314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h68sq" event={"ID":"e73e94ef-0cac-460a-a61d-05476626544e","Type":"ContainerStarted","Data":"2fe0519264336390c9dbf3ecdf18ec2aacb13ac30a0fa232d98f75dd2ce0ae8b"} Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.719936 4724 scope.go:117] "RemoveContainer" containerID="5e3f338e3b65e631ec835d14019c5968a69c9c0566a88ab51ac0b8f7c67d984e" Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.721238 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.729559 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-656d4464cc-fm4h2"] Feb 23 17:49:12 crc kubenswrapper[4724]: I0223 17:49:12.988610 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f344b52-c041-4a0b-bfb7-4c3ff396301a" path="/var/lib/kubelet/pods/2f344b52-c041-4a0b-bfb7-4c3ff396301a/volumes" Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.019511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.514917 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbqzq"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.526050 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-k8sd8"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.557641 4724 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.569167 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.612305 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.665202 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.690684 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-stfh8"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.694148 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k8sd8" event={"ID":"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd","Type":"ContainerStarted","Data":"ad6e956b650ca4b1a7afe57d8fc90748987e407794c8b544f7dc31c618cf7859"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.710513 4724 generic.go:334] "Generic (PLEG): container finished" podID="8fa009a6-0898-4394-8392-16e4c47c8e9a" containerID="2555913cbaa8ebf3663a484b129c84ee622ed49c7e2240fd8124c04b518cc336" exitCode=0 Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.710648 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" event={"ID":"8fa009a6-0898-4394-8392-16e4c47c8e9a","Type":"ContainerDied","Data":"2555913cbaa8ebf3663a484b129c84ee622ed49c7e2240fd8124c04b518cc336"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.714093 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2ssq" event={"ID":"7421067a-d596-4a56-82f2-39eabd33567c","Type":"ContainerStarted","Data":"b812d931894e6f2efb0282c358edc9f7218b25f2c22756d51534a43dccdb3105"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.716117 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q2ssq"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.725724 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"096be882dd2c9ce1636d5343c6cff8a7494ec3928dcd07af220ff82278694312"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.759674 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747cd567fc-7lvv6" event={"ID":"ad41c323-5f1b-4d58-bb6c-f54a4730090a","Type":"ContainerStarted","Data":"8f624d93daca90dbedff1ed8e52f6164ecf102f54a613ea7d709c1e7767de6b3"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.765713 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerStarted","Data":"7e03b4ba902cf950e11f2eaf4bdfa18fd834c3d1819eecddf568f86833f943c9"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.787844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-stfh8" event={"ID":"23123829-c64d-4376-8be6-660e7892a057","Type":"ContainerStarted","Data":"d00a24e02fbd571881af2ed30b56e30bf6e94b63811f42050632f7fe22ef2de8"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.794943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559d5d679f-9vm7m" 
event={"ID":"d32866e1-5d09-4156-b16e-d2fcff064fba","Type":"ContainerStarted","Data":"a4015e31fd7f151eddc23e83dbd4d29c378c6f048320cca7419896c03986bedd"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.817617 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"098a7e4d-3eea-40f5-861c-9c026433186b","Type":"ContainerStarted","Data":"9e086010bb3538445a17bf5e5166408e5f2a2de408c7965d0422bc794c4d280a"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.824241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h68sq" event={"ID":"e73e94ef-0cac-460a-a61d-05476626544e","Type":"ContainerStarted","Data":"cdbb62ec359ed7fd99915fc8f1c2c8c13ec554bbd52b65d843ff2bb7d478290c"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.830318 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbqzq" event={"ID":"987df27c-52c5-4950-be0d-72bbd4164ea6","Type":"ContainerStarted","Data":"c0d761e6ef001c122b42c8fdf345dd69477b5b41f6dc60b888a2cc584dafce72"} Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.872189 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h68sq" podStartSLOduration=3.872170273 podStartE2EDuration="3.872170273s" podCreationTimestamp="2026-02-23 17:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:13.852849715 +0000 UTC m=+1109.669049325" watchObservedRunningTime="2026-02-23 17:49:13.872170273 +0000 UTC m=+1109.688369873" Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.909184 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:49:13 crc kubenswrapper[4724]: I0223 17:49:13.924345 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.053639 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.091891 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.203919 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: W0223 17:49:14.233818 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbff85059_9b40_450b_889d_1911c2d13b35.slice/crio-6a453d3169276e5c6104a09bc4a672101e9ad98d5e958e058f2db3c54f386722 WatchSource:0}: Error finding container 6a453d3169276e5c6104a09bc4a672101e9ad98d5e958e058f2db3c54f386722: Status 404 returned error can't find the container with id 6a453d3169276e5c6104a09bc4a672101e9ad98d5e958e058f2db3c54f386722 Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.269352 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.344446 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.373771 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.401914 4724 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.403801 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.422219 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.447923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.493852 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.548583 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.564988 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.601934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.601979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcwd\" (UniqueName: \"kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602165 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602245 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc\") pod \"8fa009a6-0898-4394-8392-16e4c47c8e9a\" (UID: \"8fa009a6-0898-4394-8392-16e4c47c8e9a\") " Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602513 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs\") pod \"horizon-5895bb9769-7j24f\" (UID: 
\"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602572 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602645 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.602666 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.623121 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd" (OuterVolumeSpecName: "kube-api-access-9xcwd") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "kube-api-access-9xcwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.632657 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config" (OuterVolumeSpecName: "config") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.632834 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.633290 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.639373 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.660836 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fa009a6-0898-4394-8392-16e4c47c8e9a" (UID: "8fa009a6-0898-4394-8392-16e4c47c8e9a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.704234 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.704440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.704857 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.704961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.705040 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.705367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.706806 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 
crc kubenswrapper[4724]: I0223 17:49:14.707641 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.707662 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xcwd\" (UniqueName: \"kubernetes.io/projected/8fa009a6-0898-4394-8392-16e4c47c8e9a-kube-api-access-9xcwd\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.707673 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.707683 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.707693 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.707701 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa009a6-0898-4394-8392-16e4c47c8e9a-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.708426 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.709309 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.726267 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r\") pod \"horizon-5895bb9769-7j24f\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.748018 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.877174 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerID="b7bbe500fd57f46c0775448e5d5d5c3ebaee9d0f8d97a05f5869f8e43f275452" exitCode=0 Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.877254 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" event={"ID":"cc252ed4-e739-4270-b189-1b35bd5a3533","Type":"ContainerDied","Data":"b7bbe500fd57f46c0775448e5d5d5c3ebaee9d0f8d97a05f5869f8e43f275452"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.877286 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" event={"ID":"cc252ed4-e739-4270-b189-1b35bd5a3533","Type":"ContainerStarted","Data":"23245bf0e2bf83e0a09be5c9ef4af00e2630ba3cb416663dfe31a5b2019a2a1b"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.889128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" event={"ID":"8fa009a6-0898-4394-8392-16e4c47c8e9a","Type":"ContainerDied","Data":"c770bebcd4fb30f4a684710b6f9df0c817f5dff8e58e99e922be9a1cdfec57e9"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.889173 4724 scope.go:117] "RemoveContainer" containerID="2555913cbaa8ebf3663a484b129c84ee622ed49c7e2240fd8124c04b518cc336" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.889286 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-649c5dcfb9-g96zt" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.896842 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerStarted","Data":"6cb63541959b5fde6a783cfd26fd51ab7fb584c31c987591a57c4c4b17889b55"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.901220 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerStarted","Data":"fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.901264 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerStarted","Data":"ce7b1da5555bfa7ecdeec681b29b25dcff01141018ebb033d8e4ec8fd435b299"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.901376 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api-log" containerID="cri-o://ce7b1da5555bfa7ecdeec681b29b25dcff01141018ebb033d8e4ec8fd435b299" gracePeriod=30 Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.901798 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api" containerID="cri-o://fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7" gracePeriod=30 Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.901994 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.917677 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.926293 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerStarted","Data":"6a453d3169276e5c6104a09bc4a672101e9ad98d5e958e058f2db3c54f386722"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.928774 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerStarted","Data":"00b66c6b7ede5b5398a5d1f77d729aff5481f2467481886402e5c6410525d8be"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.930667 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-stfh8" event={"ID":"23123829-c64d-4376-8be6-660e7892a057","Type":"ContainerStarted","Data":"d5b80fec05b3057ddd89553615912d9121562cc5a9aae14eccf80b88544ea6e4"} Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.935280 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.965203 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.965187681 podStartE2EDuration="4.965187681s" podCreationTimestamp="2026-02-23 17:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:14.944473248 +0000 UTC m=+1110.760672848" watchObservedRunningTime="2026-02-23 17:49:14.965187681 +0000 UTC m=+1110.781387281" Feb 23 17:49:14 crc kubenswrapper[4724]: I0223 17:49:14.965887 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-stfh8" podStartSLOduration=3.965881509 podStartE2EDuration="3.965881509s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:14.963303774 +0000 UTC m=+1110.779503374" watchObservedRunningTime="2026-02-23 17:49:14.965881509 +0000 UTC m=+1110.782081109" Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.218113 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.227009 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-649c5dcfb9-g96zt"] Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.530582 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.944996 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5895bb9769-7j24f" event={"ID":"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93","Type":"ContainerStarted","Data":"b8447d4501b15ac3e7a1cf3684827cbb43216f9f666ce9d80f1e492157869ea9"} Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.951997 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerStarted","Data":"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f"} Feb 
23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.956200 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" event={"ID":"cc252ed4-e739-4270-b189-1b35bd5a3533","Type":"ContainerStarted","Data":"540cc4dad8054d7db9adbd63abe3367aaa4b9cc3c0d0f17d6296210bd132d60d"} Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.956336 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.962902 4724 generic.go:334] "Generic (PLEG): container finished" podID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerID="ce7b1da5555bfa7ecdeec681b29b25dcff01141018ebb033d8e4ec8fd435b299" exitCode=143 Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.962966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerDied","Data":"ce7b1da5555bfa7ecdeec681b29b25dcff01141018ebb033d8e4ec8fd435b299"} Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.967935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerStarted","Data":"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e"} Feb 23 17:49:15 crc kubenswrapper[4724]: I0223 17:49:15.984297 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" podStartSLOduration=4.984277842 podStartE2EDuration="4.984277842s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:15.977749977 +0000 UTC m=+1111.793949597" watchObservedRunningTime="2026-02-23 17:49:15.984277842 +0000 UTC m=+1111.800477442" Feb 23 17:49:16 crc kubenswrapper[4724]: I0223 17:49:16.682153 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:49:16 crc kubenswrapper[4724]: I0223 17:49:16.964006 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa009a6-0898-4394-8392-16e4c47c8e9a" path="/var/lib/kubelet/pods/8fa009a6-0898-4394-8392-16e4c47c8e9a/volumes" Feb 23 17:49:19 crc kubenswrapper[4724]: I0223 17:49:19.001111 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.529950 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.558738 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:49:20 crc kubenswrapper[4724]: E0223 17:49:20.559152 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa009a6-0898-4394-8392-16e4c47c8e9a" containerName="init" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.559177 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa009a6-0898-4394-8392-16e4c47c8e9a" containerName="init" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.559482 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa009a6-0898-4394-8392-16e4c47c8e9a" containerName="init" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.565264 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.569758 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.580191 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.646314 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.675068 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b4b6c94fb-ttctl"] Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.676590 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679342 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529jm\" (UniqueName: \"kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679471 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679515 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.679619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts\") pod 
\"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.704129 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4b6c94fb-ttctl"] Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781258 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqp5k\" (UniqueName: \"kubernetes.io/projected/07785399-35e6-432b-8835-4412fa3ff02b-kube-api-access-xqp5k\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781457 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07785399-35e6-432b-8835-4412fa3ff02b-logs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781547 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-tls-certs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-scripts\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529jm\" (UniqueName: \"kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781672 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781717 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-combined-ca-bundle\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781776 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781806 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-secret-key\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781837 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781876 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.781966 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-config-data\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.783542 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.783790 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.784665 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.796155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key\") pod \"horizon-74674fd4f8-mmmpd\" 
(UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.796346 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.796500 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.801113 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529jm\" (UniqueName: \"kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm\") pod \"horizon-74674fd4f8-mmmpd\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884288 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07785399-35e6-432b-8835-4412fa3ff02b-logs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884673 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-tls-certs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884713 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-scripts\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07785399-35e6-432b-8835-4412fa3ff02b-logs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884775 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-combined-ca-bundle\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884832 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-secret-key\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884880 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-config-data\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.884920 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqp5k\" (UniqueName: \"kubernetes.io/projected/07785399-35e6-432b-8835-4412fa3ff02b-kube-api-access-xqp5k\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.885469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-scripts\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.886699 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07785399-35e6-432b-8835-4412fa3ff02b-config-data\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.889694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-tls-certs\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.889905 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-combined-ca-bundle\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.896865 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07785399-35e6-432b-8835-4412fa3ff02b-horizon-secret-key\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.900826 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:20 crc kubenswrapper[4724]: I0223 17:49:20.903909 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqp5k\" (UniqueName: \"kubernetes.io/projected/07785399-35e6-432b-8835-4412fa3ff02b-kube-api-access-xqp5k\") pod \"horizon-5b4b6c94fb-ttctl\" (UID: \"07785399-35e6-432b-8835-4412fa3ff02b\") " pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.005977 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.057003 4724 generic.go:334] "Generic (PLEG): container finished" podID="e73e94ef-0cac-460a-a61d-05476626544e" containerID="cdbb62ec359ed7fd99915fc8f1c2c8c13ec554bbd52b65d843ff2bb7d478290c" exitCode=0 Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.057054 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h68sq" event={"ID":"e73e94ef-0cac-460a-a61d-05476626544e","Type":"ContainerDied","Data":"cdbb62ec359ed7fd99915fc8f1c2c8c13ec554bbd52b65d843ff2bb7d478290c"} Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.926385 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.993144 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:49:21 crc kubenswrapper[4724]: I0223 17:49:21.996077 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" containerID="cri-o://ddf58b4e407180f5020e980b08117ce07de994e115c26f57d05d0cbeb961bf5e" gracePeriod=10 Feb 23 17:49:23 crc kubenswrapper[4724]: I0223 17:49:23.074172 4724 generic.go:334] "Generic (PLEG): container finished" podID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerID="ddf58b4e407180f5020e980b08117ce07de994e115c26f57d05d0cbeb961bf5e" exitCode=0 Feb 23 17:49:23 crc kubenswrapper[4724]: I0223 17:49:23.074237 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" event={"ID":"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4","Type":"ContainerDied","Data":"ddf58b4e407180f5020e980b08117ce07de994e115c26f57d05d0cbeb961bf5e"} Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.723175 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.782824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.782869 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.783009 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.783109 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.783168 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgv6\" (UniqueName: \"kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.783331 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts\") pod \"e73e94ef-0cac-460a-a61d-05476626544e\" (UID: \"e73e94ef-0cac-460a-a61d-05476626544e\") " Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.794041 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.794159 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6" (OuterVolumeSpecName: "kube-api-access-dvgv6") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "kube-api-access-dvgv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.794562 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.807564 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts" (OuterVolumeSpecName: "scripts") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.830677 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.835676 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data" (OuterVolumeSpecName: "config-data") pod "e73e94ef-0cac-460a-a61d-05476626544e" (UID: "e73e94ef-0cac-460a-a61d-05476626544e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885739 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885774 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgv6\" (UniqueName: \"kubernetes.io/projected/e73e94ef-0cac-460a-a61d-05476626544e-kube-api-access-dvgv6\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885785 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885794 4724 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885802 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:25 crc kubenswrapper[4724]: I0223 17:49:25.885810 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e73e94ef-0cac-460a-a61d-05476626544e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.102413 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h68sq" event={"ID":"e73e94ef-0cac-460a-a61d-05476626544e","Type":"ContainerDied","Data":"2fe0519264336390c9dbf3ecdf18ec2aacb13ac30a0fa232d98f75dd2ce0ae8b"} Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.102772 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe0519264336390c9dbf3ecdf18ec2aacb13ac30a0fa232d98f75dd2ce0ae8b" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.102456 4724 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h68sq" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.800966 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h68sq"] Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.812896 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h68sq"] Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.907963 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2tlht"] Feb 23 17:49:26 crc kubenswrapper[4724]: E0223 17:49:26.908538 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73e94ef-0cac-460a-a61d-05476626544e" containerName="keystone-bootstrap" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.908557 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73e94ef-0cac-460a-a61d-05476626544e" containerName="keystone-bootstrap" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.908746 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73e94ef-0cac-460a-a61d-05476626544e" containerName="keystone-bootstrap" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.909635 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.911853 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8cc4s" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.912035 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.912164 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.912211 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.913993 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.926013 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2tlht"] Feb 23 17:49:26 crc kubenswrapper[4724]: I0223 17:49:26.963523 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73e94ef-0cac-460a-a61d-05476626544e" path="/var/lib/kubelet/pods/e73e94ef-0cac-460a-a61d-05476626544e/volumes" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.006749 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrp6\" (UniqueName: \"kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.006806 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.006831 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.007188 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.007329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.007427 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110842 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrp6\" (UniqueName: \"kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110877 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110899 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.110978 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.116671 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.117228 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.118929 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.124048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.132431 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.134855 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrp6\" (UniqueName: \"kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6\") pod \"keystone-bootstrap-2tlht\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.230100 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.751681 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.751738 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.751787 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.752499 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 17:49:27 crc kubenswrapper[4724]: I0223 17:49:27.752554 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c" gracePeriod=600 Feb 23 17:49:28 crc kubenswrapper[4724]: I0223 17:49:28.134322 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c" exitCode=0 Feb 23 17:49:28 crc kubenswrapper[4724]: I0223 17:49:28.134440 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c"} Feb 23 17:49:28 crc kubenswrapper[4724]: I0223 17:49:28.134524 4724 scope.go:117] "RemoveContainer" containerID="558f0555580cf65f49e1db87e25baa9b3fcbcc94e63b57b3a835c127120a597f" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.112333 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.112660 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.112804 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh67h586h679h5f7h5c7h674h65dhfh64dh5bdh694h5b7h79h5cch5d4h668h5f6h646h5d7h558h656h588hb8h659h4h577hcch8dhbbh58ch544q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4p296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-559d5d679f-9vm7m_openstack(d32866e1-5d09-4156-b16e-d2fcff064fba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.116244 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-559d5d679f-9vm7m" podUID="d32866e1-5d09-4156-b16e-d2fcff064fba" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.130988 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.131038 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.131236 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n544hcdh7h686h87h8bhfdh9ch58ch688hcdh59bh5cchc6hc5h5c8h54ch557h5fbh576hb7h58dh5b5h5bch76h8fhfch5dfhf5h78h65h85q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5895bb9769-7j24f_openstack(6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.133249 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5895bb9769-7j24f" podUID="6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.145913 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.145963 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.146060 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64h686hdch5d6h65ch55ch67bh5c9h9hd5h6ch59ch9bh5c6h564h68fh5fh546h689h699h699hfch566h568h668h8dhd5h5b9h547h5h97h7cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d459z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-747cd567fc-7lvv6_openstack(ad41c323-5f1b-4d58-bb6c-f54a4730090a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:49:31 crc kubenswrapper[4724]: E0223 17:49:31.148214 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-747cd567fc-7lvv6" podUID="ad41c323-5f1b-4d58-bb6c-f54a4730090a" Feb 23 17:49:31 crc kubenswrapper[4724]: I0223 17:49:31.540017 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 23 17:49:36 crc kubenswrapper[4724]: I0223 17:49:36.541156 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 23 17:49:41 crc kubenswrapper[4724]: I0223 17:49:41.543088 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 23 17:49:41 crc kubenswrapper[4724]: I0223 17:49:41.543861 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.829253 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.838961 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.862905 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915533 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmh9p\" (UniqueName: \"kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d459z\" (UniqueName: \"kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z\") pod \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915656 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key\") pod \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915689 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915739 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key\") pod \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915775 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915838 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts\") pod \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs\") pod \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915895 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915942 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data\") pod \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\" (UID: \"ad41c323-5f1b-4d58-bb6c-f54a4730090a\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.915995 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts\") pod \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.916091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r\") pod \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.916121 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs\") pod \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.916183 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.916237 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config\") pod \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\" (UID: \"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.916266 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data\") pod \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\" (UID: \"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93\") " Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.917636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs" (OuterVolumeSpecName: "logs") pod "ad41c323-5f1b-4d58-bb6c-f54a4730090a" (UID: "ad41c323-5f1b-4d58-bb6c-f54a4730090a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.918555 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts" (OuterVolumeSpecName: "scripts") pod "ad41c323-5f1b-4d58-bb6c-f54a4730090a" (UID: "ad41c323-5f1b-4d58-bb6c-f54a4730090a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.918850 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs" (OuterVolumeSpecName: "logs") pod "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" (UID: "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.919301 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.919357 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts" (OuterVolumeSpecName: "scripts") pod "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" (UID: "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.919445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data" (OuterVolumeSpecName: "config-data") pod "ad41c323-5f1b-4d58-bb6c-f54a4730090a" (UID: "ad41c323-5f1b-4d58-bb6c-f54a4730090a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.921117 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data" (OuterVolumeSpecName: "config-data") pod "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" (UID: "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.921166 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r" (OuterVolumeSpecName: "kube-api-access-ggp5r") pod "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" (UID: "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93"). InnerVolumeSpecName "kube-api-access-ggp5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.922041 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p" (OuterVolumeSpecName: "kube-api-access-tmh9p") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "kube-api-access-tmh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.923060 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ad41c323-5f1b-4d58-bb6c-f54a4730090a" (UID: "ad41c323-5f1b-4d58-bb6c-f54a4730090a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.927490 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" (UID: "6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.952942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z" (OuterVolumeSpecName: "kube-api-access-d459z") pod "ad41c323-5f1b-4d58-bb6c-f54a4730090a" (UID: "ad41c323-5f1b-4d58-bb6c-f54a4730090a"). InnerVolumeSpecName "kube-api-access-d459z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.980315 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.989673 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:42 crc kubenswrapper[4724]: I0223 17:49:42.994233 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.002294 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.005767 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config" (OuterVolumeSpecName: "config") pod "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" (UID: "e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.018460 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts\") pod \"d32866e1-5d09-4156-b16e-d2fcff064fba\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.018647 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p296\" (UniqueName: \"kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296\") pod \"d32866e1-5d09-4156-b16e-d2fcff064fba\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.018751 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data\") pod \"d32866e1-5d09-4156-b16e-d2fcff064fba\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.018856 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key\") pod \"d32866e1-5d09-4156-b16e-d2fcff064fba\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.018935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs\") pod \"d32866e1-5d09-4156-b16e-d2fcff064fba\" (UID: \"d32866e1-5d09-4156-b16e-d2fcff064fba\") " Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019029 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts" (OuterVolumeSpecName: "scripts") pod "d32866e1-5d09-4156-b16e-d2fcff064fba" (UID: "d32866e1-5d09-4156-b16e-d2fcff064fba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019799 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmh9p\" (UniqueName: \"kubernetes.io/projected/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-kube-api-access-tmh9p\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019845 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019861 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d459z\" (UniqueName: \"kubernetes.io/projected/ad41c323-5f1b-4d58-bb6c-f54a4730090a-kube-api-access-d459z\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019871 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ad41c323-5f1b-4d58-bb6c-f54a4730090a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019885 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019921 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019935 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019946 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019957 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad41c323-5f1b-4d58-bb6c-f54a4730090a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019968 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.019978 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41c323-5f1b-4d58-bb6c-f54a4730090a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020007 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020148 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-kube-api-access-ggp5r\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc 
kubenswrapper[4724]: I0223 17:49:43.020157 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020187 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020195 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020203 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020076 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data" (OuterVolumeSpecName: "config-data") pod "d32866e1-5d09-4156-b16e-d2fcff064fba" (UID: "d32866e1-5d09-4156-b16e-d2fcff064fba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.020991 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs" (OuterVolumeSpecName: "logs") pod "d32866e1-5d09-4156-b16e-d2fcff064fba" (UID: "d32866e1-5d09-4156-b16e-d2fcff064fba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.023442 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296" (OuterVolumeSpecName: "kube-api-access-4p296") pod "d32866e1-5d09-4156-b16e-d2fcff064fba" (UID: "d32866e1-5d09-4156-b16e-d2fcff064fba"). InnerVolumeSpecName "kube-api-access-4p296". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.023761 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d32866e1-5d09-4156-b16e-d2fcff064fba" (UID: "d32866e1-5d09-4156-b16e-d2fcff064fba"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.121908 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d32866e1-5d09-4156-b16e-d2fcff064fba-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.122239 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d32866e1-5d09-4156-b16e-d2fcff064fba-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.122253 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32866e1-5d09-4156-b16e-d2fcff064fba-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.122262 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p296\" (UniqueName: \"kubernetes.io/projected/d32866e1-5d09-4156-b16e-d2fcff064fba-kube-api-access-4p296\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:43 crc kubenswrapper[4724]: E0223 17:49:43.158382 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad41c323_5f1b_4d58_bb6c_f54a4730090a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad41c323_5f1b_4d58_bb6c_f54a4730090a.slice/crio-8f624d93daca90dbedff1ed8e52f6164ecf102f54a613ea7d709c1e7767de6b3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd5dbc3_b1da_45d2_a7c5_3ac5d2742f93.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd32866e1_5d09_4156_b16e_d2fcff064fba.slice\": RecentStats: unable to find data in memory cache]" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.273031 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559d5d679f-9vm7m" event={"ID":"d32866e1-5d09-4156-b16e-d2fcff064fba","Type":"ContainerDied","Data":"a4015e31fd7f151eddc23e83dbd4d29c378c6f048320cca7419896c03986bedd"} Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.273170 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559d5d679f-9vm7m" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.291168 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" event={"ID":"e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4","Type":"ContainerDied","Data":"e965a91ca09412a1ac66e18b3fb15d5fede78961149007444e9ee2c8da031e86"} Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.291850 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.294610 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-747cd567fc-7lvv6" event={"ID":"ad41c323-5f1b-4d58-bb6c-f54a4730090a","Type":"ContainerDied","Data":"8f624d93daca90dbedff1ed8e52f6164ecf102f54a613ea7d709c1e7767de6b3"} Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.294661 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-747cd567fc-7lvv6" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.296122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5895bb9769-7j24f" event={"ID":"6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93","Type":"ContainerDied","Data":"b8447d4501b15ac3e7a1cf3684827cbb43216f9f666ce9d80f1e492157869ea9"} Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.296219 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5895bb9769-7j24f" Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.338146 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.346380 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-559d5d679f-9vm7m"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.370821 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.378562 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5895bb9769-7j24f"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.414084 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.424326 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-747cd567fc-7lvv6"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.434263 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:49:43 crc kubenswrapper[4724]: I0223 17:49:43.443647 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c4949dfdc-glzsk"] Feb 23 17:49:43 crc kubenswrapper[4724]: E0223 17:49:43.492238 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 23 17:49:43 crc kubenswrapper[4724]: E0223 17:49:43.492295 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 23 17:49:43 crc kubenswrapper[4724]: E0223 17:49:43.492430 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.147:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmpm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-k8sd8_openstack(05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:49:43 crc kubenswrapper[4724]: E0223 17:49:43.493990 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-k8sd8" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" Feb 23 17:49:44 crc kubenswrapper[4724]: E0223 17:49:44.305188 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-k8sd8" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.505019 4724 scope.go:117] "RemoveContainer" containerID="ddf58b4e407180f5020e980b08117ce07de994e115c26f57d05d0cbeb961bf5e" Feb 23 17:49:44 crc kubenswrapper[4724]: E0223 17:49:44.509609 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 23 17:49:44 crc kubenswrapper[4724]: E0223 17:49:44.509644 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 23 17:49:44 crc kubenswrapper[4724]: E0223 17:49:44.509962 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.147:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kbqzq_openstack(987df27c-52c5-4950-be0d-72bbd4164ea6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 17:49:44 crc kubenswrapper[4724]: E0223 17:49:44.511685 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kbqzq" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.771585 4724 scope.go:117] "RemoveContainer" containerID="78101d8fa9f487ff449ed71d66b10b2d053d213d358cf8736056e3004cdbcd59" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.966371 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93" path="/var/lib/kubelet/pods/6cd5dbc3-b1da-45d2-a7c5-3ac5d2742f93/volumes" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.967602 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad41c323-5f1b-4d58-bb6c-f54a4730090a" path="/var/lib/kubelet/pods/ad41c323-5f1b-4d58-bb6c-f54a4730090a/volumes" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.968033 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d32866e1-5d09-4156-b16e-d2fcff064fba" path="/var/lib/kubelet/pods/d32866e1-5d09-4156-b16e-d2fcff064fba/volumes" Feb 23 17:49:44 crc kubenswrapper[4724]: I0223 17:49:44.968628 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" path="/var/lib/kubelet/pods/e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4/volumes" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.009304 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.021134 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4b6c94fb-ttctl"] Feb 23 17:49:45 crc kubenswrapper[4724]: W0223 17:49:45.037537 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf53406b_fb3c_41f5_86af_b78ac8d5df6d.slice/crio-655a790b2a5fc56d6a2ccd75d9f2c86566b7cd4eef511372c724e4ee48743d8e WatchSource:0}: Error finding container 655a790b2a5fc56d6a2ccd75d9f2c86566b7cd4eef511372c724e4ee48743d8e: Status 404 returned error can't find the container with id 655a790b2a5fc56d6a2ccd75d9f2c86566b7cd4eef511372c724e4ee48743d8e Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.134318 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2tlht"] Feb 23 17:49:45 crc kubenswrapper[4724]: W0223 17:49:45.139322 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc5a19d_05b2_4ca5_bf8e_0274d62c9a0b.slice/crio-13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684 WatchSource:0}: Error finding container 13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684: Status 404 returned error can't find the container with id 13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684 Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.145978 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.324298 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2tlht" event={"ID":"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b","Type":"ContainerStarted","Data":"13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.327992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerStarted","Data":"d9fecb18242066d76feca02682eee3c73ddfba742dc5358eaf55e3998693314e"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.329812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.349496 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4b6c94fb-ttctl" event={"ID":"07785399-35e6-432b-8835-4412fa3ff02b","Type":"ContainerStarted","Data":"381475c8662f4e8bd644f247fa05e793e463393669b514dc6ae137b05c612e1a"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.353356 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.258684191 
podStartE2EDuration="35.353342587s" podCreationTimestamp="2026-02-23 17:49:10 +0000 UTC" firstStartedPulling="2026-02-23 17:49:13.610831494 +0000 UTC m=+1109.427031094" lastFinishedPulling="2026-02-23 17:49:42.7054899 +0000 UTC m=+1138.521689490" observedRunningTime="2026-02-23 17:49:45.352978038 +0000 UTC m=+1141.169177638" watchObservedRunningTime="2026-02-23 17:49:45.353342587 +0000 UTC m=+1141.169542187" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.366296 4724 generic.go:334] "Generic (PLEG): container finished" podID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerID="fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7" exitCode=137 Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.366497 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerDied","Data":"fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.370030 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerStarted","Data":"655a790b2a5fc56d6a2ccd75d9f2c86566b7cd4eef511372c724e4ee48743d8e"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.377642 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-log" containerID="cri-o://5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" gracePeriod=30 Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.377817 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerStarted","Data":"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.377806 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-httpd" containerID="cri-o://7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" gracePeriod=30 Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.400765 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"098a7e4d-3eea-40f5-861c-9c026433186b","Type":"ContainerStarted","Data":"260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.402156 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.402136839 podStartE2EDuration="34.402136839s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:45.401873952 +0000 UTC m=+1141.218073552" watchObservedRunningTime="2026-02-23 17:49:45.402136839 +0000 UTC m=+1141.218336439" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.406216 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2ssq" event={"ID":"7421067a-d596-4a56-82f2-39eabd33567c","Type":"ContainerStarted","Data":"2d17032652c3c4cd2052b7d405025127ff9fe855f64f91c894b7002244475759"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 
17:49:45.411117 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840"} Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.447049 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q2ssq" podStartSLOduration=5.33231651 podStartE2EDuration="34.447029392s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="2026-02-23 17:49:13.619498023 +0000 UTC m=+1109.435697623" lastFinishedPulling="2026-02-23 17:49:42.734210905 +0000 UTC m=+1138.550410505" observedRunningTime="2026-02-23 17:49:45.439163184 +0000 UTC m=+1141.255362784" watchObservedRunningTime="2026-02-23 17:49:45.447029392 +0000 UTC m=+1141.263228992" Feb 23 17:49:45 crc kubenswrapper[4724]: E0223 17:49:45.449539 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-kbqzq" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.453030 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.401899888 podStartE2EDuration="35.453019844s" podCreationTimestamp="2026-02-23 17:49:10 +0000 UTC" firstStartedPulling="2026-02-23 17:49:13.608430544 +0000 UTC m=+1109.424630144" lastFinishedPulling="2026-02-23 17:49:42.65955047 +0000 UTC m=+1138.475750100" observedRunningTime="2026-02-23 17:49:45.424780891 +0000 UTC m=+1141.240980491" watchObservedRunningTime="2026-02-23 17:49:45.453019844 +0000 UTC m=+1141.269219444" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.530936 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.569076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv5tn\" (UniqueName: \"kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn\") pod \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.569151 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data\") pod \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.569249 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca\") pod \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.569277 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs\") pod \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.569298 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle\") pod \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\" (UID: \"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857\") " Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.582855 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs" (OuterVolumeSpecName: "logs") pod "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" (UID: "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.623077 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn" (OuterVolumeSpecName: "kube-api-access-bv5tn") pod "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" (UID: "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857"). InnerVolumeSpecName "kube-api-access-bv5tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.677362 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv5tn\" (UniqueName: \"kubernetes.io/projected/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-kube-api-access-bv5tn\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.677402 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.769713 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" (UID: "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.774942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data" (OuterVolumeSpecName: "config-data") pod "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" (UID: "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.775311 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" (UID: "5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.779696 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.779720 4724 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:45 crc kubenswrapper[4724]: I0223 17:49:45.779731 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.423957 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerStarted","Data":"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.424331 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-log" containerID="cri-o://de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" gracePeriod=30 Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.424574 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-httpd" containerID="cri-o://fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" gracePeriod=30 Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.430637 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4b6c94fb-ttctl" event={"ID":"07785399-35e6-432b-8835-4412fa3ff02b","Type":"ContainerStarted","Data":"cca77116a7c8ae271dead3e371e22b62ec22c528b0ced373ce724450a1c2dbd5"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.430686 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4b6c94fb-ttctl" event={"ID":"07785399-35e6-432b-8835-4412fa3ff02b","Type":"ContainerStarted","Data":"b6ada9b6e334cb5f15604e6a6436ebf24bfa44fd1a2d7f95a7a356e2cfe1a443"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.446004 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=35.445980845 podStartE2EDuration="35.445980845s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:46.442707762 +0000 UTC m=+1142.258907382" watchObservedRunningTime="2026-02-23 17:49:46.445980845 +0000 UTC m=+1142.262180445" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.448563 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857","Type":"ContainerDied","Data":"7e03b4ba902cf950e11f2eaf4bdfa18fd834c3d1819eecddf568f86833f943c9"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.448618 4724 scope.go:117] "RemoveContainer" 
containerID="fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.448742 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.481172 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b4b6c94fb-ttctl" podStartSLOduration=26.346981966 podStartE2EDuration="26.481136163s" podCreationTimestamp="2026-02-23 17:49:20 +0000 UTC" firstStartedPulling="2026-02-23 17:49:45.037590434 +0000 UTC m=+1140.853790034" lastFinishedPulling="2026-02-23 17:49:45.171744631 +0000 UTC m=+1140.987944231" observedRunningTime="2026-02-23 17:49:46.475218863 +0000 UTC m=+1142.291418463" watchObservedRunningTime="2026-02-23 17:49:46.481136163 +0000 UTC m=+1142.297335763" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.500253 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.506291 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerStarted","Data":"1655072b2b368448156effff044965d4dd72cc86d075ab29bd3d947a764a0158"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.506743 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerStarted","Data":"e2211db7088619e4eb64abce15e5e8d41646526a13426ffbd781e4629c000ebd"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.522263 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.536047 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2tlht" event={"ID":"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b","Type":"ContainerStarted","Data":"020bc17f4ca0c21f819b49a77352653529f8879ef25555fab725be24d49f8c76"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.544341 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c4949dfdc-glzsk" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.557098 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.574510 4724 scope.go:117] "RemoveContainer" containerID="ce7b1da5555bfa7ecdeec681b29b25dcff01141018ebb033d8e4ec8fd435b299" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586004 4724 generic.go:334] "Generic (PLEG): container finished" podID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerID="7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" exitCode=143 Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586052 4724 generic.go:334] "Generic (PLEG): container finished" podID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerID="5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" exitCode=143 Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586105 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerDied","Data":"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586172 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerDied","Data":"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586184 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05587f8a-86e1-40f7-82ff-9d5909739c1c","Type":"ContainerDied","Data":"00b66c6b7ede5b5398a5d1f77d729aff5481f2467481886402e5c6410525d8be"} Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.586346 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.594606 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595184 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-httpd" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595210 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-httpd" Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595230 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-log" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595239 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-log" Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595263 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595272 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api" Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595286 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="init" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595295 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="init" Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595317 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api-log" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595328 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api-log" Feb 23 17:49:46 crc kubenswrapper[4724]: E0223 17:49:46.595350 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595359 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.595994 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9c9\" (UniqueName: \"kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596045 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596089 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596111 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596168 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596253 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596301 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.596339 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data\") pod \"05587f8a-86e1-40f7-82ff-9d5909739c1c\" (UID: \"05587f8a-86e1-40f7-82ff-9d5909739c1c\") " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.600945 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs" (OuterVolumeSpecName: "logs") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.605187 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2fbb1eb-d1f9-4ce9-81f7-aaaaddab0ca4" containerName="dnsmasq-dns" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.624010 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.624045 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-log" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.624061 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" containerName="glance-httpd" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.624078 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" containerName="watcher-api-log" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.625474 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.613160 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.625595 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.626472 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.636214 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts" (OuterVolumeSpecName: "scripts") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.640313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9" (OuterVolumeSpecName: "kube-api-access-tn9c9") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "kube-api-access-tn9c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.647630 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.684874 4724 scope.go:117] "RemoveContainer" containerID="7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.692144 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.696214 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74674fd4f8-mmmpd" podStartSLOduration=26.562842556 podStartE2EDuration="26.696197893s" podCreationTimestamp="2026-02-23 17:49:20 +0000 UTC" firstStartedPulling="2026-02-23 17:49:45.040488517 +0000 UTC m=+1140.856688117" lastFinishedPulling="2026-02-23 17:49:45.173843854 +0000 UTC m=+1140.990043454" observedRunningTime="2026-02-23 17:49:46.582743488 +0000 UTC m=+1142.398943088" watchObservedRunningTime="2026-02-23 17:49:46.696197893 +0000 UTC m=+1142.512397493" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698232 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698279 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698343 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698366 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkrk\" (UniqueName: \"kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698413 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698485 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9c9\" (UniqueName: \"kubernetes.io/projected/05587f8a-86e1-40f7-82ff-9d5909739c1c-kube-api-access-tn9c9\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698496 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698506 4724 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698523 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.698533 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05587f8a-86e1-40f7-82ff-9d5909739c1c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.739583 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2tlht" podStartSLOduration=20.739563808 podStartE2EDuration="20.739563808s" podCreationTimestamp="2026-02-23 17:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:46.619039365 +0000 UTC m=+1142.435238965" watchObservedRunningTime="2026-02-23 17:49:46.739563808 +0000 UTC m=+1142.555763408" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.776595 4724 scope.go:117] "RemoveContainer" containerID="5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.800788 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.800876 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.800999 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.801049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkrk\" (UniqueName: \"kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.801111 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.803431 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc 
kubenswrapper[4724]: I0223 17:49:46.820209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.820425 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.825870 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.834242 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.840187 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkrk\" (UniqueName: \"kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk\") pod \"watcher-api-0\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " pod="openstack/watcher-api-0" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.906583 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.953772 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.972795 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857" path="/var/lib/kubelet/pods/5c0efb1b-cbc1-4ac1-b969-ce5ae7b03857/volumes" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.992580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data" (OuterVolumeSpecName: "config-data") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:46 crc kubenswrapper[4724]: I0223 17:49:46.992637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05587f8a-86e1-40f7-82ff-9d5909739c1c" (UID: "05587f8a-86e1-40f7-82ff-9d5909739c1c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.010826 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.010856 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.010867 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05587f8a-86e1-40f7-82ff-9d5909739c1c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.054817 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.259674 4724 scope.go:117] "RemoveContainer" containerID="7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.260460 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150\": container with ID starting with 7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150 not found: ID does not exist" containerID="7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.260492 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150"} err="failed to get container status \"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150\": rpc error: code = NotFound desc = could not find container \"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150\": container with ID starting with 7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150 not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.260511 4724 scope.go:117] "RemoveContainer" containerID="5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.260790 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f\": container with ID starting with 5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f not found: ID does not exist" containerID="5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.260810 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f"} err="failed to get container status \"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f\": rpc error: code = NotFound desc = could not find container \"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f\": container with ID starting with 5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 
17:49:47.260825 4724 scope.go:117] "RemoveContainer" containerID="7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.261104 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150"} err="failed to get container status \"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150\": rpc error: code = NotFound desc = could not find container \"7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150\": container with ID starting with 7cb829588d3599c3b8413ccc7b6b867cbf7ed232d68f1a6b509c003ac967b150 not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.261119 4724 scope.go:117] "RemoveContainer" containerID="5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.261370 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f"} err="failed to get container status \"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f\": rpc error: code = NotFound desc = could not find container \"5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f\": container with ID starting with 5556df7539f41b14c7a9d8a89bd4577aea3ff7948ad3aba229004fab3546f11f not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.267806 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.291362 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.301816 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.303330 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.306204 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.309651 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.310360 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.418048 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.422081 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwtr\" (UniqueName: \"kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.422138 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.422188 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.422285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.424067 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.424180 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.424206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.424270 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525251 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs\") pod 
\"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525347 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525371 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdhp2\" (UniqueName: \"kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525436 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525485 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525567 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525643 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525700 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle\") pod \"bff85059-9b40-450b-889d-1911c2d13b35\" (UID: \"bff85059-9b40-450b-889d-1911c2d13b35\") " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525881 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs" (OuterVolumeSpecName: "logs") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525934 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.525997 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526080 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526097 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526173 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwtr\" (UniqueName: \"kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526238 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.526645 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.531323 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts" (OuterVolumeSpecName: "scripts") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.532749 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.533049 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.536535 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.536965 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2" (OuterVolumeSpecName: "kube-api-access-kdhp2") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "kube-api-access-kdhp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.539945 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.541504 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.542219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.544971 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.552427 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.567310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwtr\" (UniqueName: \"kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.577623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.596140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerStarted","Data":"5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6"} Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.598965 4724 generic.go:334] "Generic (PLEG): container finished" podID="bff85059-9b40-450b-889d-1911c2d13b35" containerID="fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" exitCode=0 Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.598985 4724 generic.go:334] "Generic (PLEG): container finished" podID="bff85059-9b40-450b-889d-1911c2d13b35" containerID="de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" exitCode=143 Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.599013 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerDied","Data":"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d"} Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.599033 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerDied","Data":"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e"} Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.599044 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"bff85059-9b40-450b-889d-1911c2d13b35","Type":"ContainerDied","Data":"6a453d3169276e5c6104a09bc4a672101e9ad98d5e958e058f2db3c54f386722"} Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.599059 4724 scope.go:117] "RemoveContainer" containerID="fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.599147 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.607471 4724 generic.go:334] "Generic (PLEG): container finished" podID="23123829-c64d-4376-8be6-660e7892a057" containerID="d5b80fec05b3057ddd89553615912d9121562cc5a9aae14eccf80b88544ea6e4" exitCode=0 Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.608060 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-stfh8" event={"ID":"23123829-c64d-4376-8be6-660e7892a057","Type":"ContainerDied","Data":"d5b80fec05b3057ddd89553615912d9121562cc5a9aae14eccf80b88544ea6e4"} Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.611019 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.623005 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628585 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628627 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628640 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bff85059-9b40-450b-889d-1911c2d13b35-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628652 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628700 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.628712 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdhp2\" (UniqueName: \"kubernetes.io/projected/bff85059-9b40-450b-889d-1911c2d13b35-kube-api-access-kdhp2\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.640664 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.653872 4724 scope.go:117] "RemoveContainer" containerID="de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.665503 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data" (OuterVolumeSpecName: "config-data") pod "bff85059-9b40-450b-889d-1911c2d13b35" (UID: "bff85059-9b40-450b-889d-1911c2d13b35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.667076 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.690768 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.692556 4724 scope.go:117] "RemoveContainer" containerID="fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.696498 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d\": container with ID starting with fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d not found: ID does not exist" containerID="fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.696537 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d"} err="failed to get container status \"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d\": rpc error: code = NotFound desc = could not find container \"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d\": container with ID starting with fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.696562 4724 scope.go:117] "RemoveContainer" containerID="de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.700572 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e\": container with ID starting with de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e not found: ID does not exist" containerID="de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.700715 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e"} err="failed to get container status \"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e\": rpc error: code = NotFound desc = could not find container \"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e\": container with ID starting with de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.700786 4724 scope.go:117] "RemoveContainer" containerID="fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.704563 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d"} err="failed to get container status \"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d\": rpc error: code = NotFound desc = could not find container \"fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d\": 
container with ID starting with fbecfde50d5ad7486976a4316236032989c8b8534d71723f92d8d037a62ccf6d not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.704590 4724 scope.go:117] "RemoveContainer" containerID="de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.705651 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e"} err="failed to get container status \"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e\": rpc error: code = NotFound desc = could not find container \"de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e\": container with ID starting with de3c51d3f6780335c6f0a1b3523910133e25baf24c4804042f50183120681a7e not found: ID does not exist" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.731184 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.731229 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff85059-9b40-450b-889d-1911c2d13b35-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.943084 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.970033 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.991342 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.991816 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-log" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.991840 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-log" Feb 23 17:49:47 crc kubenswrapper[4724]: E0223 17:49:47.991870 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-httpd" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.991878 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-httpd" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.992108 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-httpd" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.992145 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff85059-9b40-450b-889d-1911c2d13b35" containerName="glance-log" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.993287 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.997186 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 17:49:47 crc kubenswrapper[4724]: I0223 17:49:47.997518 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.003744 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.145729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146138 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146171 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146231 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4dr\" (UniqueName: \"kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146255 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146288 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.146305 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251275 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4dr\" (UniqueName: \"kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251320 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251352 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251369 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251451 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251480 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251501 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251530 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.251976 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.252196 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.253512 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.258515 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.258623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.258863 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.262075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.263676 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.281679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn4dr\" (UniqueName: \"kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.315854 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") " pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.323736 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.648676 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerStarted","Data":"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b"} Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.649010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerStarted","Data":"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b"} Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.649023 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerStarted","Data":"774b41cc038a0fa4a49df4e5bd1e90c9021a2cfc82023d2a29ecf00a4f87ff01"} Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.650676 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.659843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerStarted","Data":"9ade4ad5f2754b3ecaabd2f44b8ddcd9075c78fa4fddb524ee085f8630c1d51d"} Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.683456 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.683436889 podStartE2EDuration="2.683436889s" podCreationTimestamp="2026-02-23 17:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:48.676782491 +0000 UTC m=+1144.492982091" watchObservedRunningTime="2026-02-23 17:49:48.683436889 +0000 UTC m=+1144.499636499" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.898121 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 17:49:48 crc kubenswrapper[4724]: W0223 17:49:48.910672 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea216262_5ec8_4c74_8cec_376d7241e6a8.slice/crio-c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3 WatchSource:0}: Error finding container c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3: Status 404 returned error can't find the container with id c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3 Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.992316 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05587f8a-86e1-40f7-82ff-9d5909739c1c" path="/var/lib/kubelet/pods/05587f8a-86e1-40f7-82ff-9d5909739c1c/volumes" Feb 23 17:49:48 crc kubenswrapper[4724]: I0223 17:49:48.999145 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff85059-9b40-450b-889d-1911c2d13b35" path="/var/lib/kubelet/pods/bff85059-9b40-450b-889d-1911c2d13b35/volumes" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.203112 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.374504 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle\") pod \"23123829-c64d-4376-8be6-660e7892a057\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.374640 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config\") pod \"23123829-c64d-4376-8be6-660e7892a057\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.374742 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q57hr\" (UniqueName: \"kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr\") pod \"23123829-c64d-4376-8be6-660e7892a057\" (UID: \"23123829-c64d-4376-8be6-660e7892a057\") " Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.380378 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr" (OuterVolumeSpecName: "kube-api-access-q57hr") pod "23123829-c64d-4376-8be6-660e7892a057" (UID: "23123829-c64d-4376-8be6-660e7892a057"). InnerVolumeSpecName "kube-api-access-q57hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.404787 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23123829-c64d-4376-8be6-660e7892a057" (UID: "23123829-c64d-4376-8be6-660e7892a057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.407322 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config" (OuterVolumeSpecName: "config") pod "23123829-c64d-4376-8be6-660e7892a057" (UID: "23123829-c64d-4376-8be6-660e7892a057"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.476824 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.476871 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q57hr\" (UniqueName: \"kubernetes.io/projected/23123829-c64d-4376-8be6-660e7892a057-kube-api-access-q57hr\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.476889 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23123829-c64d-4376-8be6-660e7892a057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.715450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerStarted","Data":"b6983b542727d47e3909c7a0e5d2098fbe7de7dab8de7baa5b87c28a3af808db"} Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.721309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerStarted","Data":"c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3"} Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.725565 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-stfh8" event={"ID":"23123829-c64d-4376-8be6-660e7892a057","Type":"ContainerDied","Data":"d00a24e02fbd571881af2ed30b56e30bf6e94b63811f42050632f7fe22ef2de8"} Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.725593 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00a24e02fbd571881af2ed30b56e30bf6e94b63811f42050632f7fe22ef2de8" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.725611 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-stfh8" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.889319 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:49:49 crc kubenswrapper[4724]: E0223 17:49:49.889727 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23123829-c64d-4376-8be6-660e7892a057" containerName="neutron-db-sync" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.889738 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="23123829-c64d-4376-8be6-660e7892a057" containerName="neutron-db-sync" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.889919 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="23123829-c64d-4376-8be6-660e7892a057" containerName="neutron-db-sync" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.892238 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.900622 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.997322 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.997512 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.997557 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.997582 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.997629 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829gx\" (UniqueName: \"kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:49 crc kubenswrapper[4724]: I0223 17:49:49.998897 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100194 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100615 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100647 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100704 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829gx\" (UniqueName: \"kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.100724 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.101466 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.101491 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.101775 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.101984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.102003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.117048 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b67f89948-r429p"] Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.124746 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4w99c"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.125045 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.125133 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.130304 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.139276 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829gx\" (UniqueName: \"kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx\") pod \"dnsmasq-dns-7cdfc95f79-n8pfz\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.167292 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b67f89948-r429p"]
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.202417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.209647 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.209772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.209886 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8s7\" (UniqueName: \"kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.209982 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.249993 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz"
Need to start a new one" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.311743 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.311829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8s7\" (UniqueName: \"kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.311882 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.311980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.312014 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.317000 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.317003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.318182 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.321994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.367187 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hz8s7\" (UniqueName: \"kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7\") pod \"neutron-5b67f89948-r429p\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.521071 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.815260 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerStarted","Data":"bff2dc251ebe0f525f3a9b7f471ef5c7e64ab32deb29c9382a6695ad17c6762e"} Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.815308 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.903275 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:50 crc kubenswrapper[4724]: I0223 17:49:50.903331 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.008495 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.008545 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.049215 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.281832 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b67f89948-r429p"] Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.557436 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:49:51 crc kubenswrapper[4724]: E0223 17:49:51.614462 4724 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 23 17:49:51 crc kubenswrapper[4724]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 23 17:49:51 crc kubenswrapper[4724]: fail startup Feb 23 17:49:51 crc kubenswrapper[4724]: , stdout: , stderr: , exit code -1 Feb 23 17:49:51 crc kubenswrapper[4724]: > containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 17:49:51 crc kubenswrapper[4724]: E0223 17:49:51.617829 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4 is running failed: container process not found" containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 17:49:51 crc kubenswrapper[4724]: E0223 17:49:51.621472 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4 is running failed: container process not found" 
containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 17:49:51 crc kubenswrapper[4724]: E0223 17:49:51.621518 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.625626 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.671138 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.835106 4724 generic.go:334] "Generic (PLEG): container finished" podID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" exitCode=1 Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.835152 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4"} Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.835981 4724 scope.go:117] "RemoveContainer" containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.850924 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerStarted","Data":"294f31868cdcc017ecaf968c643faacf81ef76a3632549753f1869e91df6f12e"} Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.864350 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerStarted","Data":"9bfe9e18ae2365edf0716f2f387235bee21577f05d64aed5741d350a3ebde028"} Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.873022 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerStarted","Data":"7cbacc20033fdc2236a16a7da6062a41d5ecb3f9fb4a5be55257888fd066fbe4"} Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.873058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerStarted","Data":"995033ca820787bed1cf5548f851028fd863649a3077f41c19941b2449420faf"} Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.881188 4724 generic.go:334] "Generic (PLEG): container finished" podID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerID="db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471" exitCode=0 Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.881486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" event={"ID":"539cdd64-b5ce-475b-aed3-ebe41fcf5896","Type":"ContainerDied","Data":"db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471"} Feb 23 17:49:51 crc kubenswrapper[4724]: 
Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.883060 4724 generic.go:334] "Generic (PLEG): container finished" podID="7421067a-d596-4a56-82f2-39eabd33567c" containerID="2d17032652c3c4cd2052b7d405025127ff9fe855f64f91c894b7002244475759" exitCode=0
Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.883206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2ssq" event={"ID":"7421067a-d596-4a56-82f2-39eabd33567c","Type":"ContainerDied","Data":"2d17032652c3c4cd2052b7d405025127ff9fe855f64f91c894b7002244475759"}
Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.929554 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.929532301 podStartE2EDuration="4.929532301s" podCreationTimestamp="2026-02-23 17:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:51.90969247 +0000 UTC m=+1147.725892070" watchObservedRunningTime="2026-02-23 17:49:51.929532301 +0000 UTC m=+1147.745731901"
Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.949479 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.949463924 podStartE2EDuration="4.949463924s" podCreationTimestamp="2026-02-23 17:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:51.93741828 +0000 UTC m=+1147.753617880" watchObservedRunningTime="2026-02-23 17:49:51.949463924 +0000 UTC m=+1147.765663524"
Feb 23 17:49:51 crc kubenswrapper[4724]: I0223 17:49:51.987627 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.056407 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.056499 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.100271 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.343927 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.605798 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"]
Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.607642 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-588b89dd65-d4wqn"
Need to start a new one" pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.618574 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.618752 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.628123 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"] Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.685881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.685942 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrd4\" (UniqueName: \"kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.685988 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.686041 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.686079 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.686099 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.686118 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.787809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.787876 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrd4\" (UniqueName: \"kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.787916 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.787972 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.788009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.788023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.788044 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.793880 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.801146 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.802124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle\") pod \"neutron-588b89dd65-d4wqn\" (UID: 
\"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.802256 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.810965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.811422 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.813521 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrd4\" (UniqueName: \"kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4\") pod \"neutron-588b89dd65-d4wqn\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.896565 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" event={"ID":"539cdd64-b5ce-475b-aed3-ebe41fcf5896","Type":"ContainerStarted","Data":"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922"} Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.897562 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.903668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606"} Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.908167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerStarted","Data":"c5d4f31a76e7501685c05c02ab7f671b92f2076ecbd40beaa6b549572565c277"} Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.908301 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.911409 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" containerID="020bc17f4ca0c21f819b49a77352653529f8879ef25555fab725be24d49f8c76" exitCode=0 Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.911915 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2tlht" event={"ID":"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b","Type":"ContainerDied","Data":"020bc17f4ca0c21f819b49a77352653529f8879ef25555fab725be24d49f8c76"} Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.930053 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" podStartSLOduration=3.930035074 podStartE2EDuration="3.930035074s" podCreationTimestamp="2026-02-23 17:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:52.915705652 +0000 UTC m=+1148.731905252" watchObservedRunningTime="2026-02-23 17:49:52.930035074 +0000 UTC m=+1148.746234674" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.946363 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:52 crc kubenswrapper[4724]: I0223 17:49:52.948291 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b67f89948-r429p" podStartSLOduration=2.948273934 podStartE2EDuration="2.948273934s" podCreationTimestamp="2026-02-23 17:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:52.944296024 +0000 UTC m=+1148.760495634" watchObservedRunningTime="2026-02-23 17:49:52.948273934 +0000 UTC m=+1148.764473534" Feb 23 17:49:53 crc kubenswrapper[4724]: E0223 17:49:53.389219 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:49:53 crc kubenswrapper[4724]: I0223 17:49:53.923946 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" containerName="watcher-applier" containerID="cri-o://260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" gracePeriod=30 Feb 23 17:49:54 crc kubenswrapper[4724]: I0223 17:49:54.937448 4724 generic.go:334] "Generic (PLEG): container finished" podID="098a7e4d-3eea-40f5-861c-9c026433186b" containerID="260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" exitCode=0 Feb 23 17:49:54 crc kubenswrapper[4724]: I0223 17:49:54.937529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"098a7e4d-3eea-40f5-861c-9c026433186b","Type":"ContainerDied","Data":"260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51"} Feb 23 17:49:55 crc kubenswrapper[4724]: I0223 17:49:55.953074 4724 generic.go:334] "Generic (PLEG): container finished" podID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerID="b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606" exitCode=1 Feb 23 17:49:55 crc kubenswrapper[4724]: I0223 17:49:55.953144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606"} Feb 23 17:49:55 crc kubenswrapper[4724]: I0223 17:49:55.953383 4724 scope.go:117] "RemoveContainer" containerID="7e4682491ba39455c36ca3a4ae47e601edfb68796561c3f91cd736b629c55dd4" Feb 23 17:49:55 crc kubenswrapper[4724]: I0223 17:49:55.953743 4724 scope.go:117] "RemoveContainer" containerID="b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606" Feb 23 17:49:55 crc kubenswrapper[4724]: E0223 17:49:55.953925 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:49:56 crc kubenswrapper[4724]: E0223 17:49:56.624327 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51 is running failed: container process not found" containerID="260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:49:56 crc kubenswrapper[4724]: E0223 17:49:56.625137 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51 is running failed: container process not found" containerID="260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:49:56 crc kubenswrapper[4724]: E0223 17:49:56.626342 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51 is running failed: container process not found" containerID="260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:49:56 crc kubenswrapper[4724]: E0223 17:49:56.626416 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" containerName="watcher-applier" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.692213 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.723597 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778625 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778753 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts\") pod \"7421067a-d596-4a56-82f2-39eabd33567c\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778774 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778803 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs\") pod \"7421067a-d596-4a56-82f2-39eabd33567c\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778844 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcrsg\" (UniqueName: \"kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg\") pod \"7421067a-d596-4a56-82f2-39eabd33567c\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778880 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle\") pod \"7421067a-d596-4a56-82f2-39eabd33567c\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778904 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcrp6\" (UniqueName: \"kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data\") pod \"7421067a-d596-4a56-82f2-39eabd33567c\" (UID: \"7421067a-d596-4a56-82f2-39eabd33567c\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.778987 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.779026 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: 
\"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.779057 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts\") pod \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\" (UID: \"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.790001 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg" (OuterVolumeSpecName: "kube-api-access-xcrsg") pod "7421067a-d596-4a56-82f2-39eabd33567c" (UID: "7421067a-d596-4a56-82f2-39eabd33567c"). InnerVolumeSpecName "kube-api-access-xcrsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.790664 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts" (OuterVolumeSpecName: "scripts") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.799240 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.800007 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs" (OuterVolumeSpecName: "logs") pod "7421067a-d596-4a56-82f2-39eabd33567c" (UID: "7421067a-d596-4a56-82f2-39eabd33567c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.807493 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts" (OuterVolumeSpecName: "scripts") pod "7421067a-d596-4a56-82f2-39eabd33567c" (UID: "7421067a-d596-4a56-82f2-39eabd33567c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.807620 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.812260 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6" (OuterVolumeSpecName: "kube-api-access-bcrp6") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "kube-api-access-bcrp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.819547 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.832940 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data" (OuterVolumeSpecName: "config-data") pod "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" (UID: "3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.835727 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.854613 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7421067a-d596-4a56-82f2-39eabd33567c" (UID: "7421067a-d596-4a56-82f2-39eabd33567c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.881515 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data" (OuterVolumeSpecName: "config-data") pod "7421067a-d596-4a56-82f2-39eabd33567c" (UID: "7421067a-d596-4a56-82f2-39eabd33567c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.881681 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.882964 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.882983 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.882996 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883009 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883023 4724 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883033 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7421067a-d596-4a56-82f2-39eabd33567c-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883046 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcrsg\" (UniqueName: \"kubernetes.io/projected/7421067a-d596-4a56-82f2-39eabd33567c-kube-api-access-xcrsg\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883059 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.883100 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcrp6\" (UniqueName: \"kubernetes.io/projected/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b-kube-api-access-bcrp6\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.964681 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.967013 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2tlht" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.971758 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"098a7e4d-3eea-40f5-861c-9c026433186b","Type":"ContainerDied","Data":"9e086010bb3538445a17bf5e5166408e5f2a2de408c7965d0422bc794c4d280a"} Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.971799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2tlht" event={"ID":"3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b","Type":"ContainerDied","Data":"13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684"} Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.971815 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13629885143ce824f3279a46941fa7dccb8dbff471dedbe80b8e70c5b0c38684" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.971834 4724 scope.go:117] "RemoveContainer" containerID="260e0d8d453ab55e4f3ba974ab4887d60042447ad1e7985658980d0ef5628c51" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.975808 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q2ssq" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.975994 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q2ssq" event={"ID":"7421067a-d596-4a56-82f2-39eabd33567c","Type":"ContainerDied","Data":"b812d931894e6f2efb0282c358edc9f7218b25f2c22756d51534a43dccdb3105"} Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.976048 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b812d931894e6f2efb0282c358edc9f7218b25f2c22756d51534a43dccdb3105" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.986732 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs\") pod \"098a7e4d-3eea-40f5-861c-9c026433186b\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.986909 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data\") pod \"098a7e4d-3eea-40f5-861c-9c026433186b\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.987135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle\") pod \"098a7e4d-3eea-40f5-861c-9c026433186b\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.987208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chng2\" (UniqueName: \"kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2\") pod \"098a7e4d-3eea-40f5-861c-9c026433186b\" (UID: \"098a7e4d-3eea-40f5-861c-9c026433186b\") " Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.987672 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7421067a-d596-4a56-82f2-39eabd33567c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.991260 4724 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs" (OuterVolumeSpecName: "logs") pod "098a7e4d-3eea-40f5-861c-9c026433186b" (UID: "098a7e4d-3eea-40f5-861c-9c026433186b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.997806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerStarted","Data":"daa172e2828ce21702379f3d032a44553407ffd0e5e3a6dbd3e72bf44e56fd19"} Feb 23 17:49:56 crc kubenswrapper[4724]: I0223 17:49:56.997863 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2" (OuterVolumeSpecName: "kube-api-access-chng2") pod "098a7e4d-3eea-40f5-861c-9c026433186b" (UID: "098a7e4d-3eea-40f5-861c-9c026433186b"). InnerVolumeSpecName "kube-api-access-chng2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.044583 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "098a7e4d-3eea-40f5-861c-9c026433186b" (UID: "098a7e4d-3eea-40f5-861c-9c026433186b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.055026 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.066342 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.070546 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data" (OuterVolumeSpecName: "config-data") pod "098a7e4d-3eea-40f5-861c-9c026433186b" (UID: "098a7e4d-3eea-40f5-861c-9c026433186b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.090083 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.090113 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098a7e4d-3eea-40f5-861c-9c026433186b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.090123 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chng2\" (UniqueName: \"kubernetes.io/projected/098a7e4d-3eea-40f5-861c-9c026433186b-kube-api-access-chng2\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.090132 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098a7e4d-3eea-40f5-861c-9c026433186b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.113342 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.302015 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.325512 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.341509 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:57 crc kubenswrapper[4724]: E0223 17:49:57.342893 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7421067a-d596-4a56-82f2-39eabd33567c" containerName="placement-db-sync" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.342931 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7421067a-d596-4a56-82f2-39eabd33567c" containerName="placement-db-sync" Feb 23 17:49:57 crc kubenswrapper[4724]: E0223 17:49:57.342969 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" containerName="watcher-applier" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.342977 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" containerName="watcher-applier" Feb 23 17:49:57 crc kubenswrapper[4724]: E0223 17:49:57.342994 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" containerName="keystone-bootstrap" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.343000 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" containerName="keystone-bootstrap" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.343157 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7421067a-d596-4a56-82f2-39eabd33567c" containerName="placement-db-sync" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.343173 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" containerName="keystone-bootstrap" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.343189 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" containerName="watcher-applier" Feb 23 
Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.343814 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.346615 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.396803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.397160 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8p4t\" (UniqueName: \"kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.397187 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.397290 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.499176 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.499867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.499922 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8p4t\" (UniqueName: \"kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.499949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.500453 4724
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.504063 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.514974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.519949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8p4t\" (UniqueName: \"kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t\") pod \"watcher-applier-0\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.641755 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.641823 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.665876 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.696449 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.697115 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.837529 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-57d985d94b-jc7cf"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.838983 4724 util.go:30] "No sandbox for pod can be found. 
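Need to start a new one" pod="openstack/placement-57d985d94b-jc7cf"

watcher-api-0 and the two glance pods above flip their startup probes from status="unhealthy" to status="started" within about a second. The semantics behind those two states, sketched with hypothetical period/threshold values (the real ones live in the pod specs, which this log does not show):

import time, urllib.request

# Sketch of startup-probe semantics: probe every period until one
# success, or give up after failure_threshold consecutive failures.
def startup_probe(url, period=5, failure_threshold=12):
    failures = 0
    while failures < failure_threshold:
        try:
            urllib.request.urlopen(url, timeout=2)
            return "started"      # kubelet logs status="started"
        except OSError:
            failures += 1         # logged as status="unhealthy"
            time.sleep(period)
    return "failed"               # container would be restarted

Only after the startup probe reports started does the kubelet begin running the readiness probes that appear further down for the same pods.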
Need to start a new one" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.844039 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zmpb8" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.844247 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.845237 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.846021 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.846677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.854292 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57d985d94b-jc7cf"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.908942 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx5bf\" (UniqueName: \"kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.908986 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.909017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.909102 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.909122 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.909148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.909172 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.938825 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5cb5799495-xxmx4"] Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.939987 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946127 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946362 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946501 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946634 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946735 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8cc4s" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.946846 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 17:49:57 crc kubenswrapper[4724]: I0223 17:49:57.955436 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cb5799495-xxmx4"] Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016047 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-scripts\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016420 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016456 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016499 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-public-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016528 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016551 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-fernet-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016576 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-internal-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016714 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx5bf\" (UniqueName: \"kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016751 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016788 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016840 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvkh2\" (UniqueName: \"kubernetes.io/projected/36583b8f-b74d-4f25-980e-030c8d3896c7-kube-api-access-gvkh2\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016891 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-credential-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016929 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-combined-ca-bundle\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.016956 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-config-data\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.020970 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.025404 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.029275 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.031820 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.033753 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.042301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx5bf\" (UniqueName: \"kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.048266 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs\") pod \"placement-57d985d94b-jc7cf\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") " pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.071277 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" 
event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerStarted","Data":"3ff2fd843ef53d5f1691f9303bdfe9ea0bf6b364f7a11e401a752d438181038b"} Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.071333 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerStarted","Data":"ab8707cba41239a181b76258ee6c61d599341579a7b0daa4365b3c09dc031b3b"} Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.071349 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerStarted","Data":"ac88ff228f8a130163e6b031b2cfee4b8b99a2e6d92d0c6a60627d9108c447ab"} Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.073561 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.073594 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.083676 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.108552 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-588b89dd65-d4wqn" podStartSLOduration=6.108534599 podStartE2EDuration="6.108534599s" podCreationTimestamp="2026-02-23 17:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:58.100628459 +0000 UTC m=+1153.916828059" watchObservedRunningTime="2026-02-23 17:49:58.108534599 +0000 UTC m=+1153.924734199" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-internal-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118761 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvkh2\" (UniqueName: \"kubernetes.io/projected/36583b8f-b74d-4f25-980e-030c8d3896c7-kube-api-access-gvkh2\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118810 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-credential-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118838 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-combined-ca-bundle\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118868 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-config-data\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-scripts\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-public-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.118987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-fernet-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.125491 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-internal-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.129304 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-combined-ca-bundle\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.134301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-fernet-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.152192 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-scripts\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.152541 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-credential-keys\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.152869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-public-tls-certs\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " 
pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.159816 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvkh2\" (UniqueName: \"kubernetes.io/projected/36583b8f-b74d-4f25-980e-030c8d3896c7-kube-api-access-gvkh2\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.171382 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.173135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36583b8f-b74d-4f25-980e-030c8d3896c7-config-data\") pod \"keystone-5cb5799495-xxmx4\" (UID: \"36583b8f-b74d-4f25-980e-030c8d3896c7\") " pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.271794 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.320351 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.325481 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.325518 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.386708 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.428612 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.863974 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57d985d94b-jc7cf"] Feb 23 17:49:58 crc kubenswrapper[4724]: W0223 17:49:58.869043 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63ff397b_64ac_4aa1_b20e_e2570bcc4423.slice/crio-faf8e8e01e595a8926d32d61a6c4629c500201c575f87384f9f4d17c3b44b7b3 WatchSource:0}: Error finding container faf8e8e01e595a8926d32d61a6c4629c500201c575f87384f9f4d17c3b44b7b3: Status 404 returned error can't find the container with id faf8e8e01e595a8926d32d61a6c4629c500201c575f87384f9f4d17c3b44b7b3 Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.986677 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098a7e4d-3eea-40f5-861c-9c026433186b" path="/var/lib/kubelet/pods/098a7e4d-3eea-40f5-861c-9c026433186b/volumes" Feb 23 17:49:58 crc kubenswrapper[4724]: I0223 17:49:58.987731 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cb5799495-xxmx4"] Feb 23 17:49:58 crc kubenswrapper[4724]: W0223 17:49:58.994053 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36583b8f_b74d_4f25_980e_030c8d3896c7.slice/crio-fb2e363a57183d33c2a4c168d03020ad61c29e203661da903e5a5be96df2b986 WatchSource:0}: Error finding container 
fb2e363a57183d33c2a4c168d03020ad61c29e203661da903e5a5be96df2b986: Status 404 returned error can't find the container with id fb2e363a57183d33c2a4c168d03020ad61c29e203661da903e5a5be96df2b986 Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.124593 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cb5799495-xxmx4" event={"ID":"36583b8f-b74d-4f25-980e-030c8d3896c7","Type":"ContainerStarted","Data":"fb2e363a57183d33c2a4c168d03020ad61c29e203661da903e5a5be96df2b986"} Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.130367 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56","Type":"ContainerStarted","Data":"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db"} Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.130431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56","Type":"ContainerStarted","Data":"27b16a0bdfadc318538cd26596ec33af0f6e3f5bb6d50e49f2e9bcf4e10ba500"} Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.151923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerStarted","Data":"faf8e8e01e595a8926d32d61a6c4629c500201c575f87384f9f4d17c3b44b7b3"} Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.151964 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.152850 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:49:59 crc kubenswrapper[4724]: I0223 17:49:59.153694 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.170959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cb5799495-xxmx4" event={"ID":"36583b8f-b74d-4f25-980e-030c8d3896c7","Type":"ContainerStarted","Data":"f7cdf30717b2a3ed4d6b686ef81446495f2b4ea3ad2c6e39f3d8ec89e459619a"} Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.173124 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.179023 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k8sd8" event={"ID":"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd","Type":"ContainerStarted","Data":"cb27846d6f45fc5bb8869f74bc52bff927385c5e9ffa3a8b5c01b350275cfcab"} Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.185513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerStarted","Data":"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"} Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.185577 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerStarted","Data":"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"} Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.186522 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 
17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.186617 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.188228 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.188246 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.188868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbqzq" event={"ID":"987df27c-52c5-4950-be0d-72bbd4164ea6","Type":"ContainerStarted","Data":"277f67881cf8a06dd036d527211a4dabd0e326e4726048374f9afc657ebda77f"} Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.192840 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.192820315 podStartE2EDuration="3.192820315s" podCreationTimestamp="2026-02-23 17:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:49:59.153844151 +0000 UTC m=+1154.970043741" watchObservedRunningTime="2026-02-23 17:50:00.192820315 +0000 UTC m=+1156.009019915" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.222456 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-57d985d94b-jc7cf" podStartSLOduration=3.222440633 podStartE2EDuration="3.222440633s" podCreationTimestamp="2026-02-23 17:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:00.220805572 +0000 UTC m=+1156.037005172" watchObservedRunningTime="2026-02-23 17:50:00.222440633 +0000 UTC m=+1156.038640233" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.226239 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5cb5799495-xxmx4" podStartSLOduration=3.226218229 podStartE2EDuration="3.226218229s" podCreationTimestamp="2026-02-23 17:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:00.194560439 +0000 UTC m=+1156.010760039" watchObservedRunningTime="2026-02-23 17:50:00.226218229 +0000 UTC m=+1156.042417829" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.253787 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-k8sd8" podStartSLOduration=3.6679065250000003 podStartE2EDuration="49.253766374s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="2026-02-23 17:49:13.572672701 +0000 UTC m=+1109.388872311" lastFinishedPulling="2026-02-23 17:49:59.15853256 +0000 UTC m=+1154.974732160" observedRunningTime="2026-02-23 17:50:00.245125386 +0000 UTC m=+1156.061324976" watchObservedRunningTime="2026-02-23 17:50:00.253766374 +0000 UTC m=+1156.069965974" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.301312 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.339358 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kbqzq" podStartSLOduration=4.909654718 podStartE2EDuration="49.339335515s" podCreationTimestamp="2026-02-23 17:49:11 +0000 
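UTC" firstStartedPulling="2026-02-23 17:49:13.598911253 +0000 UTC m=+1109.415110853" lastFinishedPulling="2026-02-23 17:49:58.02859206 +0000 UTC m=+1153.844791650" observedRunningTime="2026-02-23 17:50:00.328174173 +0000 UTC m=+1156.144373773" watchObservedRunningTime="2026-02-23 17:50:00.339335515 +0000 UTC m=+1156.155535115"

The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) when a pull happened. That relationship is inferred from these entries, not taken from kubelet documentation. Recomputing cinder-db-sync-kbqzq's numbers:

from datetime import datetime

# Timestamps copied from the cinder-db-sync-kbqzq entry above,
# truncated to microseconds for strptime's %f.
F = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2026-02-23 17:49:11.000000", F)
pull_from = datetime.strptime("2026-02-23 17:49:13.598911", F)
pull_to   = datetime.strptime("2026-02-23 17:49:58.028592", F)
observed  = datetime.strptime("2026-02-23 17:50:00.339335", F)

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
# -> ~49.339335s and ~4.909655s, matching the logged values

The same arithmetic reproduces neutron's 6.108534599s (no pull, so SLO equals E2E) and barbican's 3.6679065250000003s up to float rounding.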
UTC" firstStartedPulling="2026-02-23 17:49:13.598911253 +0000 UTC m=+1109.415110853" lastFinishedPulling="2026-02-23 17:49:58.02859206 +0000 UTC m=+1153.844791650" observedRunningTime="2026-02-23 17:50:00.328174173 +0000 UTC m=+1156.144373773" watchObservedRunningTime="2026-02-23 17:50:00.339335515 +0000 UTC m=+1156.155535115" Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.473033 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.473334 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="dnsmasq-dns" containerID="cri-o://540cc4dad8054d7db9adbd63abe3367aaa4b9cc3c0d0f17d6296210bd132d60d" gracePeriod=10 Feb 23 17:50:00 crc kubenswrapper[4724]: I0223 17:50:00.918588 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.023544 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4b6c94fb-ttctl" podUID="07785399-35e6-432b-8835-4412fa3ff02b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.216158 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerID="540cc4dad8054d7db9adbd63abe3367aaa4b9cc3c0d0f17d6296210bd132d60d" exitCode=0 Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.217715 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" event={"ID":"cc252ed4-e739-4270-b189-1b35bd5a3533","Type":"ContainerDied","Data":"540cc4dad8054d7db9adbd63abe3367aaa4b9cc3c0d0f17d6296210bd132d60d"} Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.217745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" event={"ID":"cc252ed4-e739-4270-b189-1b35bd5a3533","Type":"ContainerDied","Data":"23245bf0e2bf83e0a09be5c9ef4af00e2630ba3cb416663dfe31a5b2019a2a1b"} Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.217755 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23245bf0e2bf83e0a09be5c9ef4af00e2630ba3cb416663dfe31a5b2019a2a1b" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.217831 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.217839 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.228665 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337305 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68gbq\" (UniqueName: \"kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337434 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337468 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.337570 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb\") pod \"cc252ed4-e739-4270-b189-1b35bd5a3533\" (UID: \"cc252ed4-e739-4270-b189-1b35bd5a3533\") " Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.361519 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq" (OuterVolumeSpecName: "kube-api-access-68gbq") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "kube-api-access-68gbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.439487 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68gbq\" (UniqueName: \"kubernetes.io/projected/cc252ed4-e739-4270-b189-1b35bd5a3533-kube-api-access-68gbq\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.451633 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.452614 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.461204 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config" (OuterVolumeSpecName: "config") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.484694 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.500722 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc252ed4-e739-4270-b189-1b35bd5a3533" (UID: "cc252ed4-e739-4270-b189-1b35bd5a3533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.541213 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.541254 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.541269 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.541281 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.541294 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc252ed4-e739-4270-b189-1b35bd5a3533-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.557675 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.557757 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:01 crc kubenswrapper[4724]: I0223 17:50:01.558891 4724 
scope.go:117] "RemoveContainer" containerID="b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606" Feb 23 17:50:01 crc kubenswrapper[4724]: E0223 17:50:01.560026 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.071141 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.071382 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api-log" containerID="cri-o://01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b" gracePeriod=30 Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.071495 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api" containerID="cri-o://cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b" gracePeriod=30 Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.114175 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": EOF" Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.229084 4724 generic.go:334] "Generic (PLEG): container finished" podID="406c91bf-6849-4273-8751-b0a234617dd4" containerID="01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b" exitCode=143 Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.229469 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-856b879ffc-m4wq9" Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.245844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerDied","Data":"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b"} Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.294637 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.305496 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-856b879ffc-m4wq9"] Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.666035 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 23 17:50:02 crc kubenswrapper[4724]: I0223 17:50:02.961987 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" path="/var/lib/kubelet/pods/cc252ed4-e739-4270-b189-1b35bd5a3533/volumes" Feb 23 17:50:03 crc kubenswrapper[4724]: E0223 17:50:03.754683 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.782918 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.885469 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data\") pod \"406c91bf-6849-4273-8751-b0a234617dd4\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.885546 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs\") pod \"406c91bf-6849-4273-8751-b0a234617dd4\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.885582 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle\") pod \"406c91bf-6849-4273-8751-b0a234617dd4\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.885615 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkrk\" (UniqueName: \"kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk\") pod \"406c91bf-6849-4273-8751-b0a234617dd4\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.885729 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca\") pod \"406c91bf-6849-4273-8751-b0a234617dd4\" (UID: \"406c91bf-6849-4273-8751-b0a234617dd4\") " Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.886195 4724 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs" (OuterVolumeSpecName: "logs") pod "406c91bf-6849-4273-8751-b0a234617dd4" (UID: "406c91bf-6849-4273-8751-b0a234617dd4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.900815 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk" (OuterVolumeSpecName: "kube-api-access-5gkrk") pod "406c91bf-6849-4273-8751-b0a234617dd4" (UID: "406c91bf-6849-4273-8751-b0a234617dd4"). InnerVolumeSpecName "kube-api-access-5gkrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.906079 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.906457 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.908171 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.930915 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "406c91bf-6849-4273-8751-b0a234617dd4" (UID: "406c91bf-6849-4273-8751-b0a234617dd4"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.934464 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.934603 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.936324 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.962938 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "406c91bf-6849-4273-8751-b0a234617dd4" (UID: "406c91bf-6849-4273-8751-b0a234617dd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.994865 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkrk\" (UniqueName: \"kubernetes.io/projected/406c91bf-6849-4273-8751-b0a234617dd4-kube-api-access-5gkrk\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.994901 4724 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.994910 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c91bf-6849-4273-8751-b0a234617dd4-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.994919 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:03 crc kubenswrapper[4724]: I0223 17:50:03.996885 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data" (OuterVolumeSpecName: "config-data") pod "406c91bf-6849-4273-8751-b0a234617dd4" (UID: "406c91bf-6849-4273-8751-b0a234617dd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.096271 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c91bf-6849-4273-8751-b0a234617dd4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.251218 4724 generic.go:334] "Generic (PLEG): container finished" podID="406c91bf-6849-4273-8751-b0a234617dd4" containerID="cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b" exitCode=0 Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.251285 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.251329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerDied","Data":"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b"} Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.251403 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"406c91bf-6849-4273-8751-b0a234617dd4","Type":"ContainerDied","Data":"774b41cc038a0fa4a49df4e5bd1e90c9021a2cfc82023d2a29ecf00a4f87ff01"} Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.251428 4724 scope.go:117] "RemoveContainer" containerID="cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.280732 4724 scope.go:117] "RemoveContainer" containerID="01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.326053 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.343482 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370257 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.370738 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370751 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api" Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.370763 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="dnsmasq-dns" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370769 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="dnsmasq-dns" Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.370788 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api-log" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370795 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api-log" Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.370812 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="init" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370817 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="init" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.370995 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api-log" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.371017 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc252ed4-e739-4270-b189-1b35bd5a3533" containerName="dnsmasq-dns" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.371027 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="406c91bf-6849-4273-8751-b0a234617dd4" containerName="watcher-api" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.371926 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.379437 4724 scope.go:117] "RemoveContainer" containerID="cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.380216 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.380349 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.380502 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.382432 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.384437 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b\": container with ID starting with cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b not found: ID does not exist" containerID="cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.384471 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b"} err="failed to get container status \"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b\": rpc error: code = NotFound desc = could not find container \"cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b\": container with ID starting with cdb1e84b4dcda27ed4934326f0b69817bfb64b3ef7f230cf9eac85facb8ba05b not found: ID does not exist" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.384497 4724 scope.go:117] "RemoveContainer" containerID="01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b" Feb 23 17:50:04 crc kubenswrapper[4724]: E0223 17:50:04.390035 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b\": container with ID starting with 01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b not found: ID does not exist" containerID="01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.390095 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b"} err="failed to get container status \"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b\": rpc error: code = NotFound desc = could not find container \"01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b\": container with ID starting with 01a386e0bb6882288df622deef7079c1278373d4917bc2fdfb60b916bffc8e8b not found: ID does not exist" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506138 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506327 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506352 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506374 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpj59\" (UniqueName: \"kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506494 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.506571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608689 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608713 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " 
pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608743 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608771 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608815 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpj59\" (UniqueName: \"kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.608878 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.609578 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.619048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.619057 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.620054 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.626584 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.633191 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpj59\" (UniqueName: \"kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59\") pod \"watcher-api-0\" (UID: 
\"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.634002 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data\") pod \"watcher-api-0\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.704892 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:50:04 crc kubenswrapper[4724]: I0223 17:50:04.965913 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406c91bf-6849-4273-8751-b0a234617dd4" path="/var/lib/kubelet/pods/406c91bf-6849-4273-8751-b0a234617dd4/volumes" Feb 23 17:50:05 crc kubenswrapper[4724]: I0223 17:50:05.177904 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:50:07 crc kubenswrapper[4724]: I0223 17:50:07.300520 4724 generic.go:334] "Generic (PLEG): container finished" podID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" containerID="cb27846d6f45fc5bb8869f74bc52bff927385c5e9ffa3a8b5c01b350275cfcab" exitCode=0 Feb 23 17:50:07 crc kubenswrapper[4724]: I0223 17:50:07.300684 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k8sd8" event={"ID":"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd","Type":"ContainerDied","Data":"cb27846d6f45fc5bb8869f74bc52bff927385c5e9ffa3a8b5c01b350275cfcab"} Feb 23 17:50:07 crc kubenswrapper[4724]: I0223 17:50:07.666756 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 23 17:50:07 crc kubenswrapper[4724]: I0223 17:50:07.696793 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 23 17:50:08 crc kubenswrapper[4724]: I0223 17:50:08.350970 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 23 17:50:09 crc kubenswrapper[4724]: W0223 17:50:09.227274 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c6dff7_7008_48cf_8e14_42d2f92c9221.slice/crio-029280f351739876bb2c782a0c5082a8e9d6eee074f830f8b08b85c495015690 WatchSource:0}: Error finding container 029280f351739876bb2c782a0c5082a8e9d6eee074f830f8b08b85c495015690: Status 404 returned error can't find the container with id 029280f351739876bb2c782a0c5082a8e9d6eee074f830f8b08b85c495015690 Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.323048 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.326061 4724 generic.go:334] "Generic (PLEG): container finished" podID="987df27c-52c5-4950-be0d-72bbd4164ea6" containerID="277f67881cf8a06dd036d527211a4dabd0e326e4726048374f9afc657ebda77f" exitCode=0 Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.326149 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbqzq" event={"ID":"987df27c-52c5-4950-be0d-72bbd4164ea6","Type":"ContainerDied","Data":"277f67881cf8a06dd036d527211a4dabd0e326e4726048374f9afc657ebda77f"} Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.334073 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-k8sd8" event={"ID":"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd","Type":"ContainerDied","Data":"ad6e956b650ca4b1a7afe57d8fc90748987e407794c8b544f7dc31c618cf7859"} Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.334148 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad6e956b650ca4b1a7afe57d8fc90748987e407794c8b544f7dc31c618cf7859" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.334117 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-k8sd8" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.345649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerStarted","Data":"029280f351739876bb2c782a0c5082a8e9d6eee074f830f8b08b85c495015690"} Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.508086 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpm9\" (UniqueName: \"kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9\") pod \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.508149 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle\") pod \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.508357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data\") pod \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\" (UID: \"05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd\") " Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.514074 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9" (OuterVolumeSpecName: "kube-api-access-pmpm9") pod "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" (UID: "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd"). InnerVolumeSpecName "kube-api-access-pmpm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.514556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" (UID: "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.562889 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" (UID: "05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.610609 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmpm9\" (UniqueName: \"kubernetes.io/projected/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-kube-api-access-pmpm9\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.610652 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:09 crc kubenswrapper[4724]: I0223 17:50:09.610662 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.361245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerStarted","Data":"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9"} Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.361493 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerStarted","Data":"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e"} Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.361965 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.364619 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-central-agent" containerID="cri-o://d9fecb18242066d76feca02682eee3c73ddfba742dc5358eaf55e3998693314e" gracePeriod=30 Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.364908 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerStarted","Data":"d88672f21ece6f9c8b57a6221022fb9dea8ccaff2517cb59f9c19ead7d02e2b5"} Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.364956 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.365024 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="proxy-httpd" containerID="cri-o://d88672f21ece6f9c8b57a6221022fb9dea8ccaff2517cb59f9c19ead7d02e2b5" gracePeriod=30 Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.365085 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="sg-core" 
containerID="cri-o://daa172e2828ce21702379f3d032a44553407ffd0e5e3a6dbd3e72bf44e56fd19" gracePeriod=30 Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.365151 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-notification-agent" containerID="cri-o://5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6" gracePeriod=30 Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.396841 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.396816451 podStartE2EDuration="6.396816451s" podCreationTimestamp="2026-02-23 17:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:10.392240786 +0000 UTC m=+1166.208440386" watchObservedRunningTime="2026-02-23 17:50:10.396816451 +0000 UTC m=+1166.213016051" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.419989 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.069577267 podStartE2EDuration="59.419971586s" podCreationTimestamp="2026-02-23 17:49:11 +0000 UTC" firstStartedPulling="2026-02-23 17:49:13.96710854 +0000 UTC m=+1109.783308140" lastFinishedPulling="2026-02-23 17:50:09.317502859 +0000 UTC m=+1165.133702459" observedRunningTime="2026-02-23 17:50:10.41736909 +0000 UTC m=+1166.233568690" watchObservedRunningTime="2026-02-23 17:50:10.419971586 +0000 UTC m=+1166.236171186" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.665051 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68f84cbc4f-9ns6x"] Feb 23 17:50:10 crc kubenswrapper[4724]: E0223 17:50:10.665572 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" containerName="barbican-db-sync" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.665589 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" containerName="barbican-db-sync" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.665842 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" containerName="barbican-db-sync" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.667118 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.670354 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s4fjm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.670555 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.670711 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.702708 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.704301 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.710224 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.716923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68f84cbc4f-9ns6x"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.732671 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.745149 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.746735 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.760977 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.835454 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.837040 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.839867 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.845634 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06fc526-bdf8-419c-8261-29fca2da229c-logs\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.845702 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-logs\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.845779 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bg5\" (UniqueName: \"kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.845858 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78r7\" (UniqueName: \"kubernetes.io/projected/c06fc526-bdf8-419c-8261-29fca2da229c-kube-api-access-g78r7\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846314 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data-custom\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846355 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846408 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data-custom\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846451 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzl5r\" (UniqueName: \"kubernetes.io/projected/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-kube-api-access-zzl5r\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846469 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-combined-ca-bundle\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846529 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846588 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846661 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846685 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " 
pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-combined-ca-bundle\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.846788 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.860903 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.887754 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949219 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78r7\" (UniqueName: \"kubernetes.io/projected/c06fc526-bdf8-419c-8261-29fca2da229c-kube-api-access-g78r7\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ql5\" (UniqueName: \"kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949585 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949606 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data-custom\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949646 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data-custom\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949663 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949685 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949702 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzl5r\" (UniqueName: \"kubernetes.io/projected/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-kube-api-access-zzl5r\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949718 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-combined-ca-bundle\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949764 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949856 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949912 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.949962 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-combined-ca-bundle\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950002 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950062 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950087 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06fc526-bdf8-419c-8261-29fca2da229c-logs\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950116 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-logs\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950183 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52bg5\" (UniqueName: \"kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.950639 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.951371 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-logs\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.951773 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.952042 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06fc526-bdf8-419c-8261-29fca2da229c-logs\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.953158 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.953215 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.953233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.956439 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data-custom\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.957317 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-combined-ca-bundle\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.958990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.963474 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-config-data-custom\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.966475 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-config-data\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.976266 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52bg5\" (UniqueName: \"kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5\") pod \"dnsmasq-dns-7b6cf4bd7c-6flfl\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.976537 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06fc526-bdf8-419c-8261-29fca2da229c-combined-ca-bundle\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.978285 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzl5r\" (UniqueName: \"kubernetes.io/projected/d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1-kube-api-access-zzl5r\") pod \"barbican-worker-68f84cbc4f-9ns6x\" (UID: \"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1\") " pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.979993 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78r7\" (UniqueName: \"kubernetes.io/projected/c06fc526-bdf8-419c-8261-29fca2da229c-kube-api-access-g78r7\") pod \"barbican-keystone-listener-7cbfcdd8bd-6sfgm\" (UID: \"c06fc526-bdf8-419c-8261-29fca2da229c\") " pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:10 crc kubenswrapper[4724]: I0223 17:50:10.995483 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.040519 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.050952 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtjkq\" (UniqueName: \"kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051147 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051181 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051276 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle\") pod \"987df27c-52c5-4950-be0d-72bbd4164ea6\" (UID: \"987df27c-52c5-4950-be0d-72bbd4164ea6\") " Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52ql5\" (UniqueName: \"kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051612 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.051692 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.054566 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.061027 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq" (OuterVolumeSpecName: "kube-api-access-dtjkq") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "kube-api-access-dtjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.061225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.065049 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts" (OuterVolumeSpecName: "scripts") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.066718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.067226 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.070547 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.076169 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.084131 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52ql5\" (UniqueName: \"kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5\") pod \"barbican-api-65447d49b6-tcbqk\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.086011 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.127455 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.146727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data" (OuterVolumeSpecName: "config-data") pod "987df27c-52c5-4950-be0d-72bbd4164ea6" (UID: "987df27c-52c5-4950-be0d-72bbd4164ea6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154914 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154947 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/987df27c-52c5-4950-be0d-72bbd4164ea6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154963 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154974 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154985 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/987df27c-52c5-4950-be0d-72bbd4164ea6-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.154995 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtjkq\" (UniqueName: \"kubernetes.io/projected/987df27c-52c5-4950-be0d-72bbd4164ea6-kube-api-access-dtjkq\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.206230 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390132 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerID="d88672f21ece6f9c8b57a6221022fb9dea8ccaff2517cb59f9c19ead7d02e2b5" exitCode=0 Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390174 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerID="daa172e2828ce21702379f3d032a44553407ffd0e5e3a6dbd3e72bf44e56fd19" exitCode=2 Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390184 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerID="d9fecb18242066d76feca02682eee3c73ddfba742dc5358eaf55e3998693314e" exitCode=0 Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390226 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerDied","Data":"d88672f21ece6f9c8b57a6221022fb9dea8ccaff2517cb59f9c19ead7d02e2b5"} Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390257 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerDied","Data":"daa172e2828ce21702379f3d032a44553407ffd0e5e3a6dbd3e72bf44e56fd19"} Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.390267 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerDied","Data":"d9fecb18242066d76feca02682eee3c73ddfba742dc5358eaf55e3998693314e"} Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 
17:50:11.395190 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbqzq" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.395811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbqzq" event={"ID":"987df27c-52c5-4950-be0d-72bbd4164ea6","Type":"ContainerDied","Data":"c0d761e6ef001c122b42c8fdf345dd69477b5b41f6dc60b888a2cc584dafce72"} Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.395854 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d761e6ef001c122b42c8fdf345dd69477b5b41f6dc60b888a2cc584dafce72" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.556808 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.557530 4724 scope.go:117] "RemoveContainer" containerID="b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.557875 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.581745 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68f84cbc4f-9ns6x"] Feb 23 17:50:11 crc kubenswrapper[4724]: W0223 17:50:11.642128 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0195d90_e7f7_4cba_b83a_75b5e0a1bcd1.slice/crio-5959a868b1accea08d21fd9a345136a95507e47ef5b1cc9bbe493ed9c3ee18d3 WatchSource:0}: Error finding container 5959a868b1accea08d21fd9a345136a95507e47ef5b1cc9bbe493ed9c3ee18d3: Status 404 returned error can't find the container with id 5959a868b1accea08d21fd9a345136a95507e47ef5b1cc9bbe493ed9c3ee18d3 Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.689772 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:11 crc kubenswrapper[4724]: E0223 17:50:11.698458 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" containerName="cinder-db-sync" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.698495 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" containerName="cinder-db-sync" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.698710 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" containerName="cinder-db-sync" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.699721 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.703415 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d6rts" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.703635 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.704573 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.710182 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.738228 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.762183 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801258 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801364 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801418 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801543 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.801572 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jp79\" (UniqueName: \"kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.826537 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm"] 
Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.856789 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.883751 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.885996 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.892252 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902636 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902689 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jp79\" (UniqueName: \"kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902852 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.902876 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.906530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.932543 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " 
pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.932592 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.934136 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.939037 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.943687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jp79\" (UniqueName: \"kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79\") pod \"cinder-scheduler-0\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.991936 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:11 crc kubenswrapper[4724]: I0223 17:50:11.996615 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.001611 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004170 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004270 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004302 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: 
\"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004346 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.004371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.023056 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.031486 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.054483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:12 crc kubenswrapper[4724]: W0223 17:50:12.074352 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17ce0d64_cfcb_48c4_8282_b53ae002e25e.slice/crio-98fb66647564374737df1e7ea0f42184211a133921bec0fccba1e58041a865dc WatchSource:0}: Error finding container 98fb66647564374737df1e7ea0f42184211a133921bec0fccba1e58041a865dc: Status 404 returned error can't find the container with id 98fb66647564374737df1e7ea0f42184211a133921bec0fccba1e58041a865dc Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.114917 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbch8\" (UniqueName: \"kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.114966 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115001 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115050 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115067 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115086 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115109 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115184 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115207 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.115313 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.116164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.116551 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.116762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.118666 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.119363 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.154477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") pod \"dnsmasq-dns-c8bdf9fff-r2h5q\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218659 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218742 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218768 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218814 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbch8\" (UniqueName: \"kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc 
kubenswrapper[4724]: I0223 17:50:12.218849 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218864 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.218883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.219662 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.220097 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.233862 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.463939 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerStarted","Data":"98fb66647564374737df1e7ea0f42184211a133921bec0fccba1e58041a865dc"} Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.473215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" event={"ID":"c06fc526-bdf8-419c-8261-29fca2da229c","Type":"ContainerStarted","Data":"6b6d6a1d7311a7a7a7a00c9ae0104777939053c190f9323e89c2b1ade91fcd34"} Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.474512 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" event={"ID":"95b1fddb-4398-4c03-bb76-82fd6cba5f5a","Type":"ContainerStarted","Data":"9ced01e241deec85ccffc2307dc3aaa0e3150190065f2ed0e4df61a04baa6a59"} Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.476484 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.479058 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.480818 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" event={"ID":"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1","Type":"ContainerStarted","Data":"5959a868b1accea08d21fd9a345136a95507e47ef5b1cc9bbe493ed9c3ee18d3"} Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.486893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.487070 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd"} Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.490949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbch8\" (UniqueName: \"kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.507503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data\") pod \"cinder-api-0\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.647766 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:12 crc kubenswrapper[4724]: I0223 17:50:12.914456 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.163913 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"] Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.308978 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.498363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerStarted","Data":"3c27d217b62b9e5c67a623e30d659d8ce85148f7f8ed614ae26e06a56de4f4f5"} Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.498450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerStarted","Data":"0d6d4db65804f270e3f7f8a2961bb50d7eb0ee662792b30682061db1a2b978fe"} Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.498496 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.498524 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.500475 4724 generic.go:334] "Generic (PLEG): container finished" podID="95b1fddb-4398-4c03-bb76-82fd6cba5f5a" containerID="3ec76ec8f6d0df94efd288ccfb808d1e25a1eb1d9f28cb4c1988d895b966a3bb" exitCode=0 Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.500557 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" event={"ID":"95b1fddb-4398-4c03-bb76-82fd6cba5f5a","Type":"ContainerDied","Data":"3ec76ec8f6d0df94efd288ccfb808d1e25a1eb1d9f28cb4c1988d895b966a3bb"} Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.509009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerStarted","Data":"dc56615ad236af45cf2e546df222eec4fab6f4194713b39ad4349b71aee90fc3"} Feb 23 17:50:13 crc kubenswrapper[4724]: I0223 17:50:13.585976 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65447d49b6-tcbqk" podStartSLOduration=3.585959435 podStartE2EDuration="3.585959435s" podCreationTimestamp="2026-02-23 17:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:13.55486603 +0000 UTC m=+1169.371065620" watchObservedRunningTime="2026-02-23 17:50:13.585959435 +0000 UTC m=+1169.402159035" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.009137 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:50:14 crc kubenswrapper[4724]: E0223 17:50:14.051641 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a589efc_e414_47aa_90d8_14b2ad1f542e.slice/crio-conmon-5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.261981 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.336037 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.470369 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.522823 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.532189 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" event={"ID":"95b1fddb-4398-4c03-bb76-82fd6cba5f5a","Type":"ContainerDied","Data":"9ced01e241deec85ccffc2307dc3aaa0e3150190065f2ed0e4df61a04baa6a59"} Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.532237 4724 scope.go:117] "RemoveContainer" containerID="3ec76ec8f6d0df94efd288ccfb808d1e25a1eb1d9f28cb4c1988d895b966a3bb" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.536045 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" event={"ID":"9d06e4b0-b516-436c-9c9f-054cfd2dd68f","Type":"ContainerStarted","Data":"055834f770288674a53f529da3838bdfc532017fc9e8c0ea2d43b5a92e9979ee"} Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.541080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerStarted","Data":"22651926b0e1f6ef8ea3fb870a5ea3830f4e073323e4032ef8adfef054a93e38"} Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.568905 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerID="5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6" exitCode=0 Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.569788 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerDied","Data":"5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6"} Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.705676 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.705976 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711452 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711501 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52bg5\" (UniqueName: 
\"kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711540 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711736 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711774 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.711817 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb\") pod \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\" (UID: \"95b1fddb-4398-4c03-bb76-82fd6cba5f5a\") " Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.721652 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.722533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5" (OuterVolumeSpecName: "kube-api-access-52bg5") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "kube-api-access-52bg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.781613 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.787852 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config" (OuterVolumeSpecName: "config") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.788061 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.799471 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.821563 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.821590 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.821600 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.821608 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52bg5\" (UniqueName: \"kubernetes.io/projected/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-kube-api-access-52bg5\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.821617 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.824419 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95b1fddb-4398-4c03-bb76-82fd6cba5f5a" (UID: "95b1fddb-4398-4c03-bb76-82fd6cba5f5a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:14 crc kubenswrapper[4724]: I0223 17:50:14.924810 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95b1fddb-4398-4c03-bb76-82fd6cba5f5a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.595500 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b6cf4bd7c-6flfl" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.609360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a589efc-e414-47aa-90d8-14b2ad1f542e","Type":"ContainerDied","Data":"6cb63541959b5fde6a783cfd26fd51ab7fb584c31c987591a57c4c4b17889b55"} Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.609425 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb63541959b5fde6a783cfd26fd51ab7fb584c31c987591a57c4c4b17889b55" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.628806 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.743589 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.793769 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.802023 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b6cf4bd7c-6flfl"] Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849356 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849535 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849573 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849615 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4d6m\" (UniqueName: \"kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849781 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849855 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.849885 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml\") pod \"7a589efc-e414-47aa-90d8-14b2ad1f542e\" (UID: \"7a589efc-e414-47aa-90d8-14b2ad1f542e\") " Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.854491 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.854791 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.865524 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m" (OuterVolumeSpecName: "kube-api-access-s4d6m") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "kube-api-access-s4d6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.883598 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts" (OuterVolumeSpecName: "scripts") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.952331 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.952374 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4d6m\" (UniqueName: \"kubernetes.io/projected/7a589efc-e414-47aa-90d8-14b2ad1f542e-kube-api-access-s4d6m\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.952409 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.952420 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a589efc-e414-47aa-90d8-14b2ad1f542e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:15 crc kubenswrapper[4724]: I0223 17:50:15.997192 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.053679 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.059533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.065348 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data" (OuterVolumeSpecName: "config-data") pod "7a589efc-e414-47aa-90d8-14b2ad1f542e" (UID: "7a589efc-e414-47aa-90d8-14b2ad1f542e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.155460 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.155486 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a589efc-e414-47aa-90d8-14b2ad1f542e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.629811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerStarted","Data":"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.636746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" event={"ID":"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1","Type":"ContainerStarted","Data":"256c95f6c3e7e0c8a406b3dfd79b8a0c2b3244454e09bfcaaf7287393305f567"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.636801 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" event={"ID":"d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1","Type":"ContainerStarted","Data":"279229c336b0c6331c56e353e4fae3b897f8e0de8de823e395d8be42d56d699c"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.666014 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68f84cbc4f-9ns6x" podStartSLOduration=2.917367734 podStartE2EDuration="6.665993475s" podCreationTimestamp="2026-02-23 17:50:10 +0000 UTC" firstStartedPulling="2026-02-23 17:50:11.711261621 +0000 UTC m=+1167.527461221" lastFinishedPulling="2026-02-23 17:50:15.459887362 +0000 UTC m=+1171.276086962" observedRunningTime="2026-02-23 17:50:16.663538463 +0000 UTC m=+1172.479738083" watchObservedRunningTime="2026-02-23 17:50:16.665993475 +0000 UTC m=+1172.482193075" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.673607 4724 generic.go:334] "Generic (PLEG): container finished" podID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerID="3de46e65402a8d7a6f5944d471d9ae5153f298bbd9e9ce7341c3e50f05f53251" exitCode=0 Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.673733 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" event={"ID":"9d06e4b0-b516-436c-9c9f-054cfd2dd68f","Type":"ContainerDied","Data":"3de46e65402a8d7a6f5944d471d9ae5153f298bbd9e9ce7341c3e50f05f53251"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.703068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" event={"ID":"c06fc526-bdf8-419c-8261-29fca2da229c","Type":"ContainerStarted","Data":"aae12cb64d9ba41a2bfd2dcde2415407d25ed64e046c1ca82decd9f0bb936ad4"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.703117 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" event={"ID":"c06fc526-bdf8-419c-8261-29fca2da229c","Type":"ContainerStarted","Data":"cc50232804c273811af1ca16e1afb9fcf8b8d3a2f6f303db2aec9f6b532f9bc2"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.714126 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.715620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerStarted","Data":"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a"} Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.736912 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7cbfcdd8bd-6sfgm" podStartSLOduration=3.139120193 podStartE2EDuration="6.736886745s" podCreationTimestamp="2026-02-23 17:50:10 +0000 UTC" firstStartedPulling="2026-02-23 17:50:11.862099749 +0000 UTC m=+1167.678299349" lastFinishedPulling="2026-02-23 17:50:15.459866301 +0000 UTC m=+1171.276065901" observedRunningTime="2026-02-23 17:50:16.733000336 +0000 UTC m=+1172.549199936" watchObservedRunningTime="2026-02-23 17:50:16.736886745 +0000 UTC m=+1172.553086345" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.770848 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.779851 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.828950 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:16 crc kubenswrapper[4724]: E0223 17:50:16.829511 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-notification-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829529 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-notification-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: E0223 17:50:16.829566 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="sg-core" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829572 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="sg-core" Feb 23 17:50:16 crc kubenswrapper[4724]: E0223 17:50:16.829596 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b1fddb-4398-4c03-bb76-82fd6cba5f5a" containerName="init" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829603 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b1fddb-4398-4c03-bb76-82fd6cba5f5a" containerName="init" Feb 23 17:50:16 crc kubenswrapper[4724]: E0223 17:50:16.829613 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-central-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829620 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-central-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: E0223 17:50:16.829632 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="proxy-httpd" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829639 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="proxy-httpd" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829821 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b1fddb-4398-4c03-bb76-82fd6cba5f5a" containerName="init" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829835 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="sg-core" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829846 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-central-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829861 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="proxy-httpd" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.829871 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" containerName="ceilometer-notification-agent" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.834585 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.839107 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.839245 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.847922 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.964347 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a589efc-e414-47aa-90d8-14b2ad1f542e" path="/var/lib/kubelet/pods/7a589efc-e414-47aa-90d8-14b2ad1f542e/volumes" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.965112 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b1fddb-4398-4c03-bb76-82fd6cba5f5a" path="/var/lib/kubelet/pods/95b1fddb-4398-4c03-bb76-82fd6cba5f5a/volumes" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.988323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.988635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.988746 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 
crc kubenswrapper[4724]: I0223 17:50:16.988836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.988969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.989049 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wv7\" (UniqueName: \"kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:16 crc kubenswrapper[4724]: I0223 17:50:16.989154 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.090866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.090924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.090955 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.090993 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.091012 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wv7\" (UniqueName: \"kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.091049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.091106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.091642 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.091990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.099777 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.100415 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.101039 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.103322 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.113100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wv7\" (UniqueName: \"kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7\") pod \"ceilometer-0\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.180738 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.624433 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f4c5b5ccd-7xcmx"] Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.629248 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.645546 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f4c5b5ccd-7xcmx"] Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.648365 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.648364 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.715865 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.743360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" event={"ID":"9d06e4b0-b516-436c-9c9f-054cfd2dd68f","Type":"ContainerStarted","Data":"a35bb29586365a3c7e1d8dfa69147effd24fcaf6b4a7a6c19d16ad6dfdd3adb2"} Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.773136 4724 generic.go:334] "Generic (PLEG): container finished" podID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" exitCode=1 Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.773593 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd"} Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.774499 4724 scope.go:117] "RemoveContainer" containerID="b281f0460f6aacdba1984f52546e50487809ee08e50ef5a7dbd8f741cfd7e606" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.775369 4724 scope.go:117] "RemoveContainer" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" Feb 23 17:50:17 crc kubenswrapper[4724]: E0223 17:50:17.775699 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data-custom\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809749 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809767 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-public-tls-certs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: 
\"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-logs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-internal-tls-certs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809855 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nzv\" (UniqueName: \"kubernetes.io/projected/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-kube-api-access-69nzv\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.809879 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-combined-ca-bundle\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data-custom\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911446 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-public-tls-certs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911499 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-logs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911526 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-internal-tls-certs\") pod 
\"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911561 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69nzv\" (UniqueName: \"kubernetes.io/projected/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-kube-api-access-69nzv\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.911583 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-combined-ca-bundle\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.912618 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-logs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.917501 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-combined-ca-bundle\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.917667 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-public-tls-certs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.918026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data-custom\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.936250 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69nzv\" (UniqueName: \"kubernetes.io/projected/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-kube-api-access-69nzv\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.937146 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-internal-tls-certs\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: \"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.937305 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93c91f5-d9d7-4322-97c0-8d2b9ab82714-config-data\") pod \"barbican-api-6f4c5b5ccd-7xcmx\" (UID: 
\"e93c91f5-d9d7-4322-97c0-8d2b9ab82714\") " pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:17 crc kubenswrapper[4724]: I0223 17:50:17.972692 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.460596 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f4c5b5ccd-7xcmx"] Feb 23 17:50:18 crc kubenswrapper[4724]: W0223 17:50:18.469517 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93c91f5_d9d7_4322_97c0_8d2b9ab82714.slice/crio-09729fe78e5c2b269b2ac4e5fdce19e2dabd678a055e1d48dc31c4ca30ee91c3 WatchSource:0}: Error finding container 09729fe78e5c2b269b2ac4e5fdce19e2dabd678a055e1d48dc31c4ca30ee91c3: Status 404 returned error can't find the container with id 09729fe78e5c2b269b2ac4e5fdce19e2dabd678a055e1d48dc31c4ca30ee91c3 Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.784162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" event={"ID":"e93c91f5-d9d7-4322-97c0-8d2b9ab82714","Type":"ContainerStarted","Data":"09729fe78e5c2b269b2ac4e5fdce19e2dabd678a055e1d48dc31c4ca30ee91c3"} Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.789686 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerStarted","Data":"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4"} Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.789837 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api-log" containerID="cri-o://812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a" gracePeriod=30 Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.789902 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.790186 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api" containerID="cri-o://5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4" gracePeriod=30 Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.795406 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerStarted","Data":"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7"} Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.802125 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerStarted","Data":"1d29bc40c77f9e8e9d6cb383cf7ca77df5caa22b5e6d621e4d1c0f71c01af7c9"} Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.802159 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.822046 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.822022694 podStartE2EDuration="7.822022694s" podCreationTimestamp="2026-02-23 17:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:18.808880212 +0000 UTC m=+1174.625079822" watchObservedRunningTime="2026-02-23 17:50:18.822022694 +0000 UTC m=+1174.638222294" Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.860367 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" podStartSLOduration=7.860341092 podStartE2EDuration="7.860341092s" podCreationTimestamp="2026-02-23 17:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:18.856441593 +0000 UTC m=+1174.672641193" watchObservedRunningTime="2026-02-23 17:50:18.860341092 +0000 UTC m=+1174.676540692" Feb 23 17:50:18 crc kubenswrapper[4724]: I0223 17:50:18.905159 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.469573815 podStartE2EDuration="7.905130872s" podCreationTimestamp="2026-02-23 17:50:11 +0000 UTC" firstStartedPulling="2026-02-23 17:50:12.92752389 +0000 UTC m=+1168.743723490" lastFinishedPulling="2026-02-23 17:50:15.363080947 +0000 UTC m=+1171.179280547" observedRunningTime="2026-02-23 17:50:18.873009661 +0000 UTC m=+1174.689209261" watchObservedRunningTime="2026-02-23 17:50:18.905130872 +0000 UTC m=+1174.721330492" Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.266616 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.476644 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4b6c94fb-ttctl" podUID="07785399-35e6-432b-8835-4412fa3ff02b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.827805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerStarted","Data":"2e3509ce0c599b64874076cb4246093bb7533108b1b48d984371226170d9a24b"} Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.828176 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerStarted","Data":"bf473fe5890516a889ea856b4fde051ffec9276727356b83b255d1c656f4e24f"} Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.829976 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" event={"ID":"e93c91f5-d9d7-4322-97c0-8d2b9ab82714","Type":"ContainerStarted","Data":"1a2454022d087aa03c09dfc437a93d353b2356d85e14d67332fab1f7ec89083e"} Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.830019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" event={"ID":"e93c91f5-d9d7-4322-97c0-8d2b9ab82714","Type":"ContainerStarted","Data":"f525c5947180f9f6cd02bb9a1c700ebf3db036f218c0463358cd756aa822042d"} Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.831295 4724 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.831319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.840163 4724 generic.go:334] "Generic (PLEG): container finished" podID="3be48d90-f238-4e9e-83ca-c91030530489" containerID="812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a" exitCode=143 Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.841923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerDied","Data":"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a"} Feb 23 17:50:19 crc kubenswrapper[4724]: I0223 17:50:19.877859 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" podStartSLOduration=2.877822242 podStartE2EDuration="2.877822242s" podCreationTimestamp="2026-02-23 17:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:19.860466723 +0000 UTC m=+1175.676666334" watchObservedRunningTime="2026-02-23 17:50:19.877822242 +0000 UTC m=+1175.694021842" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.532055 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.807581 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"] Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.808281 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-588b89dd65-d4wqn" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" containerID="cri-o://3ff2fd843ef53d5f1691f9303bdfe9ea0bf6b364f7a11e401a752d438181038b" gracePeriod=30 Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.808061 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-588b89dd65-d4wqn" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-api" containerID="cri-o://ab8707cba41239a181b76258ee6c61d599341579a7b0daa4365b3c09dc031b3b" gracePeriod=30 Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.844242 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84d9ddfbc9-spsrv"] Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.847137 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.855619 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84d9ddfbc9-spsrv"] Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.869456 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerStarted","Data":"7ab2bfa7b1161871b03757a8a18275e2871a51c7fa5e6ff190f263bee734ae60"} Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.907849 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-588b89dd65-d4wqn" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9696/\": read tcp 10.217.0.2:55506->10.217.0.175:9696: read: connection reset by peer" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987467 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-httpd-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987522 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987541 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-internal-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-public-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-ovndb-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-combined-ca-bundle\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:20 crc kubenswrapper[4724]: I0223 17:50:20.987688 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zl5p\" (UniqueName: 
\"kubernetes.io/projected/59037714-7bc4-4c52-95d7-a791923f67fe-kube-api-access-9zl5p\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.089600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-combined-ca-bundle\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.089959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zl5p\" (UniqueName: \"kubernetes.io/projected/59037714-7bc4-4c52-95d7-a791923f67fe-kube-api-access-9zl5p\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.090227 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-httpd-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.090368 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.090472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-internal-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.090613 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-public-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.090693 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-ovndb-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.098206 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-httpd-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.099116 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-combined-ca-bundle\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: 
\"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.103041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-ovndb-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.103124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-internal-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.103177 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-config\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.113366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59037714-7bc4-4c52-95d7-a791923f67fe-public-tls-certs\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.114677 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zl5p\" (UniqueName: \"kubernetes.io/projected/59037714-7bc4-4c52-95d7-a791923f67fe-kube-api-access-9zl5p\") pod \"neutron-84d9ddfbc9-spsrv\" (UID: \"59037714-7bc4-4c52-95d7-a791923f67fe\") " pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.171151 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.556770 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.557021 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.557618 4724 scope.go:117] "RemoveContainer" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" Feb 23 17:50:21 crc kubenswrapper[4724]: E0223 17:50:21.557822 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:21 crc kubenswrapper[4724]: W0223 17:50:21.849201 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59037714_7bc4_4c52_95d7_a791923f67fe.slice/crio-59e60f1a93f655374e01f95712fd9b455fd936d95ef3592d3a9d2bef9ace369b WatchSource:0}: Error finding container 59e60f1a93f655374e01f95712fd9b455fd936d95ef3592d3a9d2bef9ace369b: Status 404 returned error can't find the container with id 59e60f1a93f655374e01f95712fd9b455fd936d95ef3592d3a9d2bef9ace369b Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.852436 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84d9ddfbc9-spsrv"] Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.895602 4724 generic.go:334] "Generic (PLEG): container finished" podID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerID="3ff2fd843ef53d5f1691f9303bdfe9ea0bf6b364f7a11e401a752d438181038b" exitCode=0 Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.895701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerDied","Data":"3ff2fd843ef53d5f1691f9303bdfe9ea0bf6b364f7a11e401a752d438181038b"} Feb 23 17:50:21 crc kubenswrapper[4724]: I0223 17:50:21.899292 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84d9ddfbc9-spsrv" event={"ID":"59037714-7bc4-4c52-95d7-a791923f67fe","Type":"ContainerStarted","Data":"59e60f1a93f655374e01f95712fd9b455fd936d95ef3592d3a9d2bef9ace369b"} Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.033308 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.238581 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.322535 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.322771 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="dnsmasq-dns" containerID="cri-o://ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922" gracePeriod=10 Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 
17:50:22.335662 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.849889 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.922922 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerStarted","Data":"c8a028a120a605183199fdabd4832a6f82658283eaf199811af1e536477344fb"} Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.924192 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.935959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84d9ddfbc9-spsrv" event={"ID":"59037714-7bc4-4c52-95d7-a791923f67fe","Type":"ContainerStarted","Data":"7ac2395b7cbc9e40c7a51ab3545a7c39ab8506738398ffa9b9b6971727c7ad85"} Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.936006 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84d9ddfbc9-spsrv" event={"ID":"59037714-7bc4-4c52-95d7-a791923f67fe","Type":"ContainerStarted","Data":"87805d31d98f34d696d705a9b5277ac0c5215d47be3013ef509fbc1bb6d7e322"} Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.936869 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.947493 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-588b89dd65-d4wqn" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9696/\": dial tcp 10.217.0.175:9696: connect: connection refused" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.948559 4724 generic.go:334] "Generic (PLEG): container finished" podID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerID="ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922" exitCode=0 Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.949428 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.951088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" event={"ID":"539cdd64-b5ce-475b-aed3-ebe41fcf5896","Type":"ContainerDied","Data":"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922"} Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.953342 4724 scope.go:117] "RemoveContainer" containerID="ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922" Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958263 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958385 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958496 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958556 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.958667 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-829gx\" (UniqueName: \"kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx\") pod \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\" (UID: \"539cdd64-b5ce-475b-aed3-ebe41fcf5896\") " Feb 23 17:50:22 crc kubenswrapper[4724]: I0223 17:50:22.987608 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx" (OuterVolumeSpecName: "kube-api-access-829gx") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "kube-api-access-829gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.013611 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.052506713 podStartE2EDuration="7.013587269s" podCreationTimestamp="2026-02-23 17:50:16 +0000 UTC" firstStartedPulling="2026-02-23 17:50:17.734505504 +0000 UTC m=+1173.550705104" lastFinishedPulling="2026-02-23 17:50:21.69558606 +0000 UTC m=+1177.511785660" observedRunningTime="2026-02-23 17:50:22.945742276 +0000 UTC m=+1178.761941876" watchObservedRunningTime="2026-02-23 17:50:23.013587269 +0000 UTC m=+1178.829786869" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.016680 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84d9ddfbc9-spsrv" podStartSLOduration=3.016672437 podStartE2EDuration="3.016672437s" podCreationTimestamp="2026-02-23 17:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:22.98076717 +0000 UTC m=+1178.796966760" watchObservedRunningTime="2026-02-23 17:50:23.016672437 +0000 UTC m=+1178.832872037" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.073857 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-829gx\" (UniqueName: \"kubernetes.io/projected/539cdd64-b5ce-475b-aed3-ebe41fcf5896-kube-api-access-829gx\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.104161 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.148332 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.168338 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.175928 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.175984 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.175998 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.176811 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.196138 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config" (OuterVolumeSpecName: "config") pod "539cdd64-b5ce-475b-aed3-ebe41fcf5896" (UID: "539cdd64-b5ce-475b-aed3-ebe41fcf5896"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.224142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdfc95f79-n8pfz" event={"ID":"539cdd64-b5ce-475b-aed3-ebe41fcf5896","Type":"ContainerDied","Data":"4c3432aec99b46263d7aa22a8274f2a8d8b3a0e3ca25efd83fbc63030ac1ab39"} Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.224191 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.236979 4724 scope.go:117] "RemoveContainer" containerID="db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.277256 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.277290 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539cdd64-b5ce-475b-aed3-ebe41fcf5896-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.278557 4724 scope.go:117] "RemoveContainer" containerID="ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922" Feb 23 17:50:23 crc kubenswrapper[4724]: E0223 17:50:23.288084 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922\": container with ID starting with ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922 not found: ID does not exist" containerID="ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922" Feb 23 17:50:23 crc 
kubenswrapper[4724]: I0223 17:50:23.288131 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922"} err="failed to get container status \"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922\": rpc error: code = NotFound desc = could not find container \"ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922\": container with ID starting with ae7e304eebf05ec14f85c6e9641c5f41c4a563f12af2a5e28a766e4f9a807922 not found: ID does not exist" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.288157 4724 scope.go:117] "RemoveContainer" containerID="db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471" Feb 23 17:50:23 crc kubenswrapper[4724]: E0223 17:50:23.288726 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471\": container with ID starting with db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471 not found: ID does not exist" containerID="db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.288796 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471"} err="failed to get container status \"db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471\": rpc error: code = NotFound desc = could not find container \"db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471\": container with ID starting with db6f6fa0097d6058dcf1d192857924d16f1a2eaa24e925627c6da1067252b471 not found: ID does not exist" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.297450 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.302586 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cdfc95f79-n8pfz"] Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.366175 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.564756 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.636944 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.961865 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="cinder-scheduler" containerID="cri-o://ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" gracePeriod=30 Feb 23 17:50:23 crc kubenswrapper[4724]: I0223 17:50:23.961925 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="probe" containerID="cri-o://869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" gracePeriod=30 Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.036045 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/horizon-5b4b6c94fb-ttctl" Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.108676 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.108883 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon-log" containerID="cri-o://e2211db7088619e4eb64abce15e5e8d41646526a13426ffbd781e4629c000ebd" gracePeriod=30 Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.109311 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" containerID="cri-o://1655072b2b368448156effff044965d4dd72cc86d075ab29bd3d947a764a0158" gracePeriod=30 Feb 23 17:50:24 crc kubenswrapper[4724]: E0223 17:50:24.602523 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.968654 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" path="/var/lib/kubelet/pods/539cdd64-b5ce-475b-aed3-ebe41fcf5896/volumes" Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.979454 4724 generic.go:334] "Generic (PLEG): container finished" podID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerID="ab8707cba41239a181b76258ee6c61d599341579a7b0daa4365b3c09dc031b3b" exitCode=0 Feb 23 17:50:24 crc kubenswrapper[4724]: I0223 17:50:24.980537 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerDied","Data":"ab8707cba41239a181b76258ee6c61d599341579a7b0daa4365b3c09dc031b3b"} Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.199017 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrd4\" (UniqueName: \"kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348412 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348532 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348550 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.348694 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config\") pod \"3ffc40a2-ae26-4a8a-bb72-828751c04730\" (UID: \"3ffc40a2-ae26-4a8a-bb72-828751c04730\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.368896 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.384595 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4" (OuterVolumeSpecName: "kube-api-access-6vrd4") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "kube-api-access-6vrd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.451693 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.451723 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrd4\" (UniqueName: \"kubernetes.io/projected/3ffc40a2-ae26-4a8a-bb72-828751c04730-kube-api-access-6vrd4\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.465522 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config" (OuterVolumeSpecName: "config") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.470550 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.476412 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.495741 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.524735 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3ffc40a2-ae26-4a8a-bb72-828751c04730" (UID: "3ffc40a2-ae26-4a8a-bb72-828751c04730"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.556946 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.556983 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.556994 4724 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.557003 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.557011 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ffc40a2-ae26-4a8a-bb72-828751c04730-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.694885 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.861645 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.861805 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.861910 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jp79\" (UniqueName: \"kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.861929 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.862016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.862049 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom\") pod \"4de93300-165d-4da7-b999-032b6f89038f\" (UID: \"4de93300-165d-4da7-b999-032b6f89038f\") " Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.862254 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.862777 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4de93300-165d-4da7-b999-032b6f89038f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.867927 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79" (OuterVolumeSpecName: "kube-api-access-9jp79") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "kube-api-access-9jp79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.868535 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts" (OuterVolumeSpecName: "scripts") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.872734 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.921880 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.962895 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data" (OuterVolumeSpecName: "config-data") pod "4de93300-165d-4da7-b999-032b6f89038f" (UID: "4de93300-165d-4da7-b999-032b6f89038f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.964780 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jp79\" (UniqueName: \"kubernetes.io/projected/4de93300-165d-4da7-b999-032b6f89038f-kube-api-access-9jp79\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.964951 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.965036 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.965123 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.965243 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de93300-165d-4da7-b999-032b6f89038f-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.995285 4724 generic.go:334] "Generic (PLEG): container finished" podID="4de93300-165d-4da7-b999-032b6f89038f" containerID="869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" exitCode=0 Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.995927 4724 generic.go:334] "Generic (PLEG): container finished" podID="4de93300-165d-4da7-b999-032b6f89038f" containerID="ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" exitCode=0 Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.995511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerDied","Data":"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7"} Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.996165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerDied","Data":"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578"} Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.996248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4de93300-165d-4da7-b999-032b6f89038f","Type":"ContainerDied","Data":"dc56615ad236af45cf2e546df222eec4fab6f4194713b39ad4349b71aee90fc3"} Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.996380 4724 scope.go:117] "RemoveContainer" containerID="869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" Feb 23 17:50:25 crc kubenswrapper[4724]: I0223 17:50:25.995637 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.009378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-588b89dd65-d4wqn" event={"ID":"3ffc40a2-ae26-4a8a-bb72-828751c04730","Type":"ContainerDied","Data":"ac88ff228f8a130163e6b031b2cfee4b8b99a2e6d92d0c6a60627d9108c447ab"} Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.009610 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-588b89dd65-d4wqn" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.044471 4724 scope.go:117] "RemoveContainer" containerID="ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.058686 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.069513 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.079712 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080170 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="dnsmasq-dns" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080186 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="dnsmasq-dns" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080198 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="cinder-scheduler" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080204 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="cinder-scheduler" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080217 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080223 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080237 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="init" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080243 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="init" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080251 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="probe" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080256 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="probe" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.080273 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-api" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080279 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-api" Feb 23 17:50:26 crc kubenswrapper[4724]: 
I0223 17:50:26.080524 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="539cdd64-b5ce-475b-aed3-ebe41fcf5896" containerName="dnsmasq-dns" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080548 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="probe" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080558 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-api" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080567 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de93300-165d-4da7-b999-032b6f89038f" containerName="cinder-scheduler" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.080579 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" containerName="neutron-httpd" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.081621 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.084460 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.091844 4724 scope.go:117] "RemoveContainer" containerID="869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.091906 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"] Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.095374 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7\": container with ID starting with 869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7 not found: ID does not exist" containerID="869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.095417 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7"} err="failed to get container status \"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7\": rpc error: code = NotFound desc = could not find container \"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7\": container with ID starting with 869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7 not found: ID does not exist" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.095479 4724 scope.go:117] "RemoveContainer" containerID="ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" Feb 23 17:50:26 crc kubenswrapper[4724]: E0223 17:50:26.095892 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578\": container with ID starting with ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578 not found: ID does not exist" containerID="ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.095911 4724 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578"} err="failed to get container status \"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578\": rpc error: code = NotFound desc = could not find container \"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578\": container with ID starting with ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578 not found: ID does not exist" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.095926 4724 scope.go:117] "RemoveContainer" containerID="869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.096247 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7"} err="failed to get container status \"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7\": rpc error: code = NotFound desc = could not find container \"869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7\": container with ID starting with 869014d3274d9ca0cb36371ee693c86ef374d2273f0965a3aefd57e1b1a052d7 not found: ID does not exist" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.096283 4724 scope.go:117] "RemoveContainer" containerID="ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.096699 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578"} err="failed to get container status \"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578\": rpc error: code = NotFound desc = could not find container \"ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578\": container with ID starting with ab179323ab428a13897d37711aeab0e67cbfdc13c121f99008984d343b13c578 not found: ID does not exist" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.096716 4724 scope.go:117] "RemoveContainer" containerID="3ff2fd843ef53d5f1691f9303bdfe9ea0bf6b364f7a11e401a752d438181038b" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.098366 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-588b89dd65-d4wqn"] Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.111969 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.169710 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nql\" (UniqueName: \"kubernetes.io/projected/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-kube-api-access-s6nql\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.169821 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.170006 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data-custom\") 
pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.170052 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.170247 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.170323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.214143 4724 scope.go:117] "RemoveContainer" containerID="ab8707cba41239a181b76258ee6c61d599341579a7b0daa4365b3c09dc031b3b" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.271911 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6nql\" (UniqueName: \"kubernetes.io/projected/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-kube-api-access-s6nql\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.280617 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.280828 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.280866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.281032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.281095 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.288518 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.289260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.293674 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.294548 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.295490 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.298161 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6nql\" (UniqueName: \"kubernetes.io/projected/34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2-kube-api-access-s6nql\") pod \"cinder-scheduler-0\" (UID: \"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2\") " pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.520336 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.965505 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffc40a2-ae26-4a8a-bb72-828751c04730" path="/var/lib/kubelet/pods/3ffc40a2-ae26-4a8a-bb72-828751c04730/volumes" Feb 23 17:50:26 crc kubenswrapper[4724]: I0223 17:50:26.966636 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de93300-165d-4da7-b999-032b6f89038f" path="/var/lib/kubelet/pods/4de93300-165d-4da7-b999-032b6f89038f/volumes" Feb 23 17:50:27 crc kubenswrapper[4724]: I0223 17:50:27.011432 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 17:50:27 crc kubenswrapper[4724]: W0223 17:50:27.013175 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34e8e653_ac7e_4bca_9ce1_e5f9f4b5b2f2.slice/crio-bff433bda241d8d5edd5ef1c93e167704a437725bfcf6659b69648b29c46b0be WatchSource:0}: Error finding container bff433bda241d8d5edd5ef1c93e167704a437725bfcf6659b69648b29c46b0be: Status 404 returned error can't find the container with id bff433bda241d8d5edd5ef1c93e167704a437725bfcf6659b69648b29c46b0be Feb 23 17:50:27 crc kubenswrapper[4724]: I0223 17:50:27.029021 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2","Type":"ContainerStarted","Data":"bff433bda241d8d5edd5ef1c93e167704a437725bfcf6659b69648b29c46b0be"} Feb 23 17:50:27 crc kubenswrapper[4724]: I0223 17:50:27.041285 4724 generic.go:334] "Generic (PLEG): container finished" podID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerID="1655072b2b368448156effff044965d4dd72cc86d075ab29bd3d947a764a0158" exitCode=0 Feb 23 17:50:27 crc kubenswrapper[4724]: I0223 17:50:27.041464 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerDied","Data":"1655072b2b368448156effff044965d4dd72cc86d075ab29bd3d947a764a0158"} Feb 23 17:50:28 crc kubenswrapper[4724]: I0223 17:50:28.074714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2","Type":"ContainerStarted","Data":"a057576921f22faabc6597cd8989231943c33edb305d385567b80f90a29483eb"} Feb 23 17:50:29 crc kubenswrapper[4724]: I0223 17:50:29.093026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2","Type":"ContainerStarted","Data":"e9f8692e99c09e1a4a2a131d7a43c24ea497e635606275d17578436cec3454b7"} Feb 23 17:50:29 crc kubenswrapper[4724]: I0223 17:50:29.118593 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.118577516 podStartE2EDuration="3.118577516s" podCreationTimestamp="2026-02-23 17:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:29.11476605 +0000 UTC m=+1184.930965680" watchObservedRunningTime="2026-02-23 17:50:29.118577516 +0000 UTC m=+1184.934777116" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.239578 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.342007 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f4c5b5ccd-7xcmx" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.379609 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.407761 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.407981 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65447d49b6-tcbqk" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api-log" containerID="cri-o://0d6d4db65804f270e3f7f8a2961bb50d7eb0ee662792b30682061db1a2b978fe" gracePeriod=30 Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.408177 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65447d49b6-tcbqk" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api" containerID="cri-o://3c27d217b62b9e5c67a623e30d659d8ce85148f7f8ed614ae26e06a56de4f4f5" gracePeriod=30 Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.426698 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57d985d94b-jc7cf" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.667296 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69f7cbf768-jd6kh"] Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.673307 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.685348 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69f7cbf768-jd6kh"] Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.770575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-config-data\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.770643 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5th6\" (UniqueName: \"kubernetes.io/projected/1b2a00ce-727b-4065-b3b4-99f43d28b54d-kube-api-access-h5th6\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.770688 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-combined-ca-bundle\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.770706 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-public-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 
17:50:30.770793 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-internal-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.770947 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2a00ce-727b-4065-b3b4-99f43d28b54d-logs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.771011 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-scripts\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873137 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-config-data\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873230 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5th6\" (UniqueName: \"kubernetes.io/projected/1b2a00ce-727b-4065-b3b4-99f43d28b54d-kube-api-access-h5th6\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-combined-ca-bundle\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873330 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-public-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-internal-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2a00ce-727b-4065-b3b4-99f43d28b54d-logs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.873532 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-scripts\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.874275 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2a00ce-727b-4065-b3b4-99f43d28b54d-logs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.880496 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-public-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.881962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-combined-ca-bundle\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.883638 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-internal-tls-certs\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.885926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-config-data\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.886115 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2a00ce-727b-4065-b3b4-99f43d28b54d-scripts\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.904179 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Feb 23 17:50:30 crc kubenswrapper[4724]: I0223 17:50:30.915992 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5th6\" (UniqueName: \"kubernetes.io/projected/1b2a00ce-727b-4065-b3b4-99f43d28b54d-kube-api-access-h5th6\") pod \"placement-69f7cbf768-jd6kh\" (UID: \"1b2a00ce-727b-4065-b3b4-99f43d28b54d\") " pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:31 crc kubenswrapper[4724]: I0223 17:50:31.000652 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:31 crc kubenswrapper[4724]: I0223 17:50:31.164905 4724 generic.go:334] "Generic (PLEG): container finished" podID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerID="0d6d4db65804f270e3f7f8a2961bb50d7eb0ee662792b30682061db1a2b978fe" exitCode=143 Feb 23 17:50:31 crc kubenswrapper[4724]: I0223 17:50:31.165486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerDied","Data":"0d6d4db65804f270e3f7f8a2961bb50d7eb0ee662792b30682061db1a2b978fe"} Feb 23 17:50:31 crc kubenswrapper[4724]: I0223 17:50:31.521781 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 17:50:31 crc kubenswrapper[4724]: I0223 17:50:31.600442 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69f7cbf768-jd6kh"] Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.028120 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65447d49b6-tcbqk" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:50150->10.217.0.183:9311: read: connection reset by peer" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.028188 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65447d49b6-tcbqk" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:50136->10.217.0.183:9311: read: connection reset by peer" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.119028 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.181091 4724 generic.go:334] "Generic (PLEG): container finished" podID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerID="3c27d217b62b9e5c67a623e30d659d8ce85148f7f8ed614ae26e06a56de4f4f5" exitCode=0 Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.181155 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerDied","Data":"3c27d217b62b9e5c67a623e30d659d8ce85148f7f8ed614ae26e06a56de4f4f5"} Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.182546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69f7cbf768-jd6kh" event={"ID":"1b2a00ce-727b-4065-b3b4-99f43d28b54d","Type":"ContainerStarted","Data":"856203b6fa0ff851d1c450f09d8ea7a497591b13a5bb207026aa773601b7e19a"} Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.182583 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69f7cbf768-jd6kh" event={"ID":"1b2a00ce-727b-4065-b3b4-99f43d28b54d","Type":"ContainerStarted","Data":"ae1b61c49c48ccb166edb678edbee5379ca886fc2b2e31273dbcb9979c52e9ad"} Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.480319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5cb5799495-xxmx4" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.598218 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.724773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle\") pod \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.725017 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52ql5\" (UniqueName: \"kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5\") pod \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.725143 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs\") pod \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.725436 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom\") pod \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.725658 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data\") pod \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\" (UID: \"17ce0d64-cfcb-48c4-8282-b53ae002e25e\") " Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.725726 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs" (OuterVolumeSpecName: "logs") pod "17ce0d64-cfcb-48c4-8282-b53ae002e25e" (UID: "17ce0d64-cfcb-48c4-8282-b53ae002e25e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.726552 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17ce0d64-cfcb-48c4-8282-b53ae002e25e-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.729293 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17ce0d64-cfcb-48c4-8282-b53ae002e25e" (UID: "17ce0d64-cfcb-48c4-8282-b53ae002e25e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.734573 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5" (OuterVolumeSpecName: "kube-api-access-52ql5") pod "17ce0d64-cfcb-48c4-8282-b53ae002e25e" (UID: "17ce0d64-cfcb-48c4-8282-b53ae002e25e"). InnerVolumeSpecName "kube-api-access-52ql5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.756059 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17ce0d64-cfcb-48c4-8282-b53ae002e25e" (UID: "17ce0d64-cfcb-48c4-8282-b53ae002e25e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.785579 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data" (OuterVolumeSpecName: "config-data") pod "17ce0d64-cfcb-48c4-8282-b53ae002e25e" (UID: "17ce0d64-cfcb-48c4-8282-b53ae002e25e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.828665 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.828707 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.828719 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52ql5\" (UniqueName: \"kubernetes.io/projected/17ce0d64-cfcb-48c4-8282-b53ae002e25e-kube-api-access-52ql5\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:32 crc kubenswrapper[4724]: I0223 17:50:32.828728 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17ce0d64-cfcb-48c4-8282-b53ae002e25e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.004897 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 17:50:33 crc kubenswrapper[4724]: E0223 17:50:33.005284 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api-log" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.005302 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api-log" Feb 23 17:50:33 crc kubenswrapper[4724]: E0223 17:50:33.005330 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.005337 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.005525 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.005560 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" containerName="barbican-api-log" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.006157 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.008436 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-56gxd" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.008462 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.008704 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.017079 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.133469 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.133620 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config-secret\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.133696 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxpq\" (UniqueName: \"kubernetes.io/projected/f5d061d8-a5d8-48fd-8f20-45eb9def3384-kube-api-access-bpxpq\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.133776 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.193404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69f7cbf768-jd6kh" event={"ID":"1b2a00ce-727b-4065-b3b4-99f43d28b54d","Type":"ContainerStarted","Data":"1a5911af999b672c2f7bd82c97e21b95c8eb8b422adf1469d4ccf130a9f61db0"} Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.193514 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.193576 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69f7cbf768-jd6kh" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.195526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65447d49b6-tcbqk" event={"ID":"17ce0d64-cfcb-48c4-8282-b53ae002e25e","Type":"ContainerDied","Data":"98fb66647564374737df1e7ea0f42184211a133921bec0fccba1e58041a865dc"} Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.195567 4724 scope.go:117] "RemoveContainer" containerID="3c27d217b62b9e5c67a623e30d659d8ce85148f7f8ed614ae26e06a56de4f4f5" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.195566 4724 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-api-65447d49b6-tcbqk" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.223104 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69f7cbf768-jd6kh" podStartSLOduration=3.223084883 podStartE2EDuration="3.223084883s" podCreationTimestamp="2026-02-23 17:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:33.209186092 +0000 UTC m=+1189.025385692" watchObservedRunningTime="2026-02-23 17:50:33.223084883 +0000 UTC m=+1189.039284483" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.229947 4724 scope.go:117] "RemoveContainer" containerID="0d6d4db65804f270e3f7f8a2961bb50d7eb0ee662792b30682061db1a2b978fe" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.235464 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.236320 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.236463 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config-secret\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.236513 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpxpq\" (UniqueName: \"kubernetes.io/projected/f5d061d8-a5d8-48fd-8f20-45eb9def3384-kube-api-access-bpxpq\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.237103 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.240971 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-openstack-config-secret\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.241025 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.242075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d061d8-a5d8-48fd-8f20-45eb9def3384-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " 
pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.249447 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-65447d49b6-tcbqk"] Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.253779 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpxpq\" (UniqueName: \"kubernetes.io/projected/f5d061d8-a5d8-48fd-8f20-45eb9def3384-kube-api-access-bpxpq\") pod \"openstackclient\" (UID: \"f5d061d8-a5d8-48fd-8f20-45eb9def3384\") " pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.327617 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 17:50:33 crc kubenswrapper[4724]: I0223 17:50:33.838409 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 17:50:33 crc kubenswrapper[4724]: W0223 17:50:33.840169 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5d061d8_a5d8_48fd_8f20_45eb9def3384.slice/crio-441d45e409b95048373b4479d513436db89ad2ab9ecd8ad7ac725a69e3414c9f WatchSource:0}: Error finding container 441d45e409b95048373b4479d513436db89ad2ab9ecd8ad7ac725a69e3414c9f: Status 404 returned error can't find the container with id 441d45e409b95048373b4479d513436db89ad2ab9ecd8ad7ac725a69e3414c9f Feb 23 17:50:34 crc kubenswrapper[4724]: I0223 17:50:34.219862 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f5d061d8-a5d8-48fd-8f20-45eb9def3384","Type":"ContainerStarted","Data":"441d45e409b95048373b4479d513436db89ad2ab9ecd8ad7ac725a69e3414c9f"} Feb 23 17:50:34 crc kubenswrapper[4724]: E0223 17:50:34.882826 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:50:34 crc kubenswrapper[4724]: I0223 17:50:34.965824 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ce0d64-cfcb-48c4-8282-b53ae002e25e" path="/var/lib/kubelet/pods/17ce0d64-cfcb-48c4-8282-b53ae002e25e/volumes" Feb 23 17:50:36 crc kubenswrapper[4724]: I0223 17:50:36.661691 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 17:50:36 crc kubenswrapper[4724]: I0223 17:50:36.952125 4724 scope.go:117] "RemoveContainer" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" Feb 23 17:50:36 crc kubenswrapper[4724]: E0223 17:50:36.952749 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.850904 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.851231 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-central-agent" containerID="cri-o://2e3509ce0c599b64874076cb4246093bb7533108b1b48d984371226170d9a24b" gracePeriod=30 Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.851257 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="sg-core" containerID="cri-o://7ab2bfa7b1161871b03757a8a18275e2871a51c7fa5e6ff190f263bee734ae60" gracePeriod=30 Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.851303 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="proxy-httpd" containerID="cri-o://c8a028a120a605183199fdabd4832a6f82658283eaf199811af1e536477344fb" gracePeriod=30 Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.851303 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-notification-agent" containerID="cri-o://bf473fe5890516a889ea856b4fde051ffec9276727356b83b255d1c656f4e24f" gracePeriod=30 Feb 23 17:50:37 crc kubenswrapper[4724]: I0223 17:50:37.856077 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.187:3000/\": read tcp 10.217.0.2:48652->10.217.0.187:3000: read: connection reset by peer" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.269476 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerID="c8a028a120a605183199fdabd4832a6f82658283eaf199811af1e536477344fb" exitCode=0 Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.269782 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerID="7ab2bfa7b1161871b03757a8a18275e2871a51c7fa5e6ff190f263bee734ae60" exitCode=2 Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.269554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerDied","Data":"c8a028a120a605183199fdabd4832a6f82658283eaf199811af1e536477344fb"} Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.269832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerDied","Data":"7ab2bfa7b1161871b03757a8a18275e2871a51c7fa5e6ff190f263bee734ae60"} Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.555702 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f447dffc7-s2mfq"] Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.557253 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.562269 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.562533 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.562635 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.579380 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f447dffc7-s2mfq"] Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644298 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-etc-swift\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644377 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-run-httpd\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644457 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-log-httpd\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644499 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-internal-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644526 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-combined-ca-bundle\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644629 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxtz\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-kube-api-access-rhxtz\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644723 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-config-data\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " 
pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.644894 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-public-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746214 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-public-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746269 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-etc-swift\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746319 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-run-httpd\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746355 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-log-httpd\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-internal-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746459 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-combined-ca-bundle\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhxtz\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-kube-api-access-rhxtz\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.746519 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-config-data\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:38 crc 
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.748221 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-run-httpd\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.752913 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-config-data\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.753230 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-internal-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.754172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-public-tls-certs\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.755263 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-combined-ca-bundle\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.755990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-etc-swift\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.771776 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhxtz\" (UniqueName: \"kubernetes.io/projected/46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b-kube-api-access-rhxtz\") pod \"swift-proxy-f447dffc7-s2mfq\" (UID: \"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b\") " pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:38 crc kubenswrapper[4724]: I0223 17:50:38.887869 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f447dffc7-s2mfq"
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.290324 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerID="2e3509ce0c599b64874076cb4246093bb7533108b1b48d984371226170d9a24b" exitCode=0
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.290383 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerDied","Data":"2e3509ce0c599b64874076cb4246093bb7533108b1b48d984371226170d9a24b"}
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.506510 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6wzkp"]
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.508277 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6wzkp"
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.544252 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6wzkp"]
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.565050 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2dj\" (UniqueName: \"kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp"
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.565191 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp"
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.611874 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a49e-account-create-update-w5ddb"]
Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.613242 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a49e-account-create-update-w5ddb"
Need to start a new one" pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.615133 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.626313 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a49e-account-create-update-w5ddb"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.666984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvg4s\" (UniqueName: \"kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.667050 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2dj\" (UniqueName: \"kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.667124 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.667208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.668729 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.691174 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2dj\" (UniqueName: \"kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj\") pod \"nova-api-db-create-6wzkp\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.714643 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2pkpd"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.716655 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.733122 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2pkpd"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.769374 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnft2\" (UniqueName: \"kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.769485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvg4s\" (UniqueName: \"kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.769554 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.769607 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.770603 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.793960 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvg4s\" (UniqueName: \"kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s\") pod \"nova-api-a49e-account-create-update-w5ddb\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.814711 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-phssd"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.816009 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.843846 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-phssd"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.846695 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.853259 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-586e-account-create-update-5vfvj"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.855558 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.857966 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwn7\" (UniqueName: \"kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnft2\" (UniqueName: \"kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871704 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgqj\" (UniqueName: \"kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871785 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.871855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.872551 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 
17:50:39.874587 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-586e-account-create-update-5vfvj"] Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.929569 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnft2\" (UniqueName: \"kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2\") pod \"nova-cell0-db-create-2pkpd\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.934693 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.973615 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.973714 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgqj\" (UniqueName: \"kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.973737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.973961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwn7\" (UniqueName: \"kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.977807 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:39 crc kubenswrapper[4724]: I0223 17:50:39.993726 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.008114 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgqj\" (UniqueName: \"kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj\") pod \"nova-cell0-586e-account-create-update-5vfvj\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " 
pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.041168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwn7\" (UniqueName: \"kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7\") pod \"nova-cell1-db-create-phssd\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.067540 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.068648 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5016-account-create-update-qmmcj"] Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.071247 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.073321 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.095339 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.095425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj28d\" (UniqueName: \"kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.103469 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5016-account-create-update-qmmcj"] Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.169211 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.188156 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.197171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.197246 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj28d\" (UniqueName: \"kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.198152 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.216272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj28d\" (UniqueName: \"kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d\") pod \"nova-cell1-5016-account-create-update-qmmcj\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.421789 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:40 crc kubenswrapper[4724]: I0223 17:50:40.901806 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Feb 23 17:50:41 crc kubenswrapper[4724]: I0223 17:50:41.556914 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:41 crc kubenswrapper[4724]: I0223 17:50:41.556962 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:41 crc kubenswrapper[4724]: I0223 17:50:41.557706 4724 scope.go:117] "RemoveContainer" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" Feb 23 17:50:43 crc kubenswrapper[4724]: I0223 17:50:43.330347 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerID="bf473fe5890516a889ea856b4fde051ffec9276727356b83b255d1c656f4e24f" exitCode=0 Feb 23 17:50:43 crc kubenswrapper[4724]: I0223 17:50:43.330423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerDied","Data":"bf473fe5890516a889ea856b4fde051ffec9276727356b83b255d1c656f4e24f"} Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.114825 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.277951 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.278263 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2wv7\" (UniqueName: \"kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.278334 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.278387 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.278447 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 
17:50:44.279120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.279269 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd\") pod \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\" (UID: \"0c5fc101-99f6-43b3-ad94-6e23741a2f27\") " Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.279744 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.279857 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.279989 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.284751 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7" (OuterVolumeSpecName: "kube-api-access-z2wv7") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "kube-api-access-z2wv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.286511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts" (OuterVolumeSpecName: "scripts") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.315243 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.352325 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f5d061d8-a5d8-48fd-8f20-45eb9def3384","Type":"ContainerStarted","Data":"a2e6602b809c5287167a6d2b759f5ad98f0ba539247f149448185730c5f6f3c4"} Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.356753 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c5fc101-99f6-43b3-ad94-6e23741a2f27","Type":"ContainerDied","Data":"1d29bc40c77f9e8e9d6cb383cf7ca77df5caa22b5e6d621e4d1c0f71c01af7c9"} Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.356818 4724 scope.go:117] "RemoveContainer" containerID="c8a028a120a605183199fdabd4832a6f82658283eaf199811af1e536477344fb" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.356960 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.365329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860"} Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.374596 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.380529 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.318620475 podStartE2EDuration="12.380504472s" podCreationTimestamp="2026-02-23 17:50:32 +0000 UTC" firstStartedPulling="2026-02-23 17:50:33.842738369 +0000 UTC m=+1189.658937969" lastFinishedPulling="2026-02-23 17:50:43.904622366 +0000 UTC m=+1199.720821966" observedRunningTime="2026-02-23 17:50:44.370300325 +0000 UTC m=+1200.186499925" watchObservedRunningTime="2026-02-23 17:50:44.380504472 +0000 UTC m=+1200.196704072" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.382840 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c5fc101-99f6-43b3-ad94-6e23741a2f27-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.382869 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.382881 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2wv7\" (UniqueName: \"kubernetes.io/projected/0c5fc101-99f6-43b3-ad94-6e23741a2f27-kube-api-access-z2wv7\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.382893 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.382905 4724 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.408379 4724 scope.go:117] "RemoveContainer" containerID="7ab2bfa7b1161871b03757a8a18275e2871a51c7fa5e6ff190f263bee734ae60" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.414296 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data" (OuterVolumeSpecName: "config-data") pod "0c5fc101-99f6-43b3-ad94-6e23741a2f27" (UID: "0c5fc101-99f6-43b3-ad94-6e23741a2f27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.446364 4724 scope.go:117] "RemoveContainer" containerID="bf473fe5890516a889ea856b4fde051ffec9276727356b83b255d1c656f4e24f" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.485587 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5fc101-99f6-43b3-ad94-6e23741a2f27-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.491637 4724 scope.go:117] "RemoveContainer" containerID="2e3509ce0c599b64874076cb4246093bb7533108b1b48d984371226170d9a24b" Feb 23 17:50:44 crc kubenswrapper[4724]: W0223 17:50:44.497044 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59c8cc9e_590f_46f6_a1b3_3cdda2e66f5b.slice/crio-ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b WatchSource:0}: Error finding container ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b: Status 404 returned error can't find the container with id ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b Feb 23 17:50:44 crc kubenswrapper[4724]: W0223 17:50:44.502276 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44b41778_b0c6_4bc1_8754_99fc38f1dad5.slice/crio-1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7 WatchSource:0}: Error finding container 1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7: Status 404 returned error can't find the container with id 1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7 Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.504261 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-phssd"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.533851 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-586e-account-create-update-5vfvj"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.574036 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f447dffc7-s2mfq"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.713848 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6wzkp"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.743153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2pkpd"] Feb 23 17:50:44 crc kubenswrapper[4724]: W0223 17:50:44.833775 4724 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cd2bad2_04ed_4658_b65e_c9a4f208114c.slice/crio-6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1 WatchSource:0}: Error finding container 6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1: Status 404 returned error can't find the container with id 6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1 Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.838619 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a49e-account-create-update-w5ddb"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.853678 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5016-account-create-update-qmmcj"] Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.860800 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 17:50:44 crc kubenswrapper[4724]: I0223 17:50:44.864240 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.159786 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.184773 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.201832 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:45 crc kubenswrapper[4724]: E0223 17:50:45.205068 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-central-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205099 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-central-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: E0223 17:50:45.205119 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="proxy-httpd" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205124 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="proxy-httpd" Feb 23 17:50:45 crc kubenswrapper[4724]: E0223 17:50:45.205140 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="sg-core" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205149 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="sg-core" Feb 23 17:50:45 crc kubenswrapper[4724]: E0223 17:50:45.205166 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-notification-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205171 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-notification-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205444 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="sg-core" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205473 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" 
containerName="ceilometer-notification-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205488 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="ceilometer-central-agent" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.205499 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" containerName="proxy-httpd" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.209584 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.212472 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.213234 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.213583 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:50:45 crc kubenswrapper[4724]: E0223 17:50:45.256445 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0efb1b_cbc1_4ac1_b969_ce5ae7b03857.slice/crio-conmon-fb492313bded3525683787416d1463c506739ef103a7abf059baf18d6f79a5a7.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311728 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnjv\" (UniqueName: \"kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311793 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311818 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311838 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.311967 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417312 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417781 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgnjv\" (UniqueName: \"kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417832 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417863 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417889 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.417962 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.418012 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.421626 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.424426 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.425887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.429045 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a49e-account-create-update-w5ddb" event={"ID":"b54d2670-b9ee-480a-a622-386abf8656f1","Type":"ContainerStarted","Data":"4b0fa17c4b4ca158a4000543e4aa9d6e7ab2a96327ab0748c250b04e42034ffb"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.429960 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.431824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.436925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.447800 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" event={"ID":"4cd2bad2-04ed-4658-b65e-c9a4f208114c","Type":"ContainerStarted","Data":"ca28e295cb85e5acc6e5e2021f2a9f421f208b6649d54f537bda9a2fd7c5fd5a"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.447861 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" event={"ID":"4cd2bad2-04ed-4658-b65e-c9a4f208114c","Type":"ContainerStarted","Data":"6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.457779 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgnjv\" (UniqueName: \"kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv\") pod \"ceilometer-0\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " pod="openstack/ceilometer-0" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.464852 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a49e-account-create-update-w5ddb" podStartSLOduration=6.46482613 podStartE2EDuration="6.46482613s" podCreationTimestamp="2026-02-23 17:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:45.454348546 +0000 UTC m=+1201.270548166" watchObservedRunningTime="2026-02-23 17:50:45.46482613 +0000 UTC m=+1201.281025750" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 
17:50:45.470070 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-phssd" event={"ID":"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b","Type":"ContainerStarted","Data":"83fba2cb037e706579174b1241e86fafbdce0404a40284dd7003cf796d401f35"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.470125 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-phssd" event={"ID":"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b","Type":"ContainerStarted","Data":"ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.479794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wzkp" event={"ID":"e3a4fd93-b17a-411c-9173-a8038523ffac","Type":"ContainerStarted","Data":"0cb183b570c886f20213a6c41179b42c248b2eac7865176ab464ead013c474b1"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.488100 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f447dffc7-s2mfq" event={"ID":"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b","Type":"ContainerStarted","Data":"1c101ede92d5126673e621eea30201a91de466162079694481c7dcf482d1ea5b"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.489409 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" event={"ID":"44b41778-b0c6-4bc1-8754-99fc38f1dad5","Type":"ContainerStarted","Data":"1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.490538 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2pkpd" event={"ID":"34e7be71-74ab-423b-9dfd-bd025758573d","Type":"ContainerStarted","Data":"5540b46feaec60b34e09c579d2a0855e9ae2aca9b9b62edb08ed44e7b54f6b6f"} Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.489364 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" podStartSLOduration=6.489348319 podStartE2EDuration="6.489348319s" podCreationTimestamp="2026-02-23 17:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:45.477372647 +0000 UTC m=+1201.293572247" watchObservedRunningTime="2026-02-23 17:50:45.489348319 +0000 UTC m=+1201.305547919" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.512369 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-phssd" podStartSLOduration=6.5123466 podStartE2EDuration="6.5123466s" podCreationTimestamp="2026-02-23 17:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:45.497914706 +0000 UTC m=+1201.314114326" watchObservedRunningTime="2026-02-23 17:50:45.5123466 +0000 UTC m=+1201.328546200" Feb 23 17:50:45 crc kubenswrapper[4724]: I0223 17:50:45.547321 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.124826 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.506885 4724 generic.go:334] "Generic (PLEG): container finished" podID="b54d2670-b9ee-480a-a622-386abf8656f1" containerID="25afc649942830651d7c742d53f9086fc3a7d1a5807c43442496ded939842527" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.506959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a49e-account-create-update-w5ddb" event={"ID":"b54d2670-b9ee-480a-a622-386abf8656f1","Type":"ContainerDied","Data":"25afc649942830651d7c742d53f9086fc3a7d1a5807c43442496ded939842527"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.509520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerStarted","Data":"8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.509787 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerStarted","Data":"b6a0f995889e47ab2e0bcc54d2b60243f5582cc7b65b460b95e5794eb1b2984f"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.511964 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f447dffc7-s2mfq" event={"ID":"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b","Type":"ContainerStarted","Data":"67b4c9462a1fee95cac6b1f516d9bd12194f8d55e307a0b6d898bd80d4ef00a7"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.512008 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f447dffc7-s2mfq" event={"ID":"46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b","Type":"ContainerStarted","Data":"e5efe74605e328b582cec256d798ba628aa5f78241379a4afb793fe96324cfa4"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.512070 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.512133 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.513986 4724 generic.go:334] "Generic (PLEG): container finished" podID="59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" containerID="83fba2cb037e706579174b1241e86fafbdce0404a40284dd7003cf796d401f35" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.514042 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-phssd" event={"ID":"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b","Type":"ContainerDied","Data":"83fba2cb037e706579174b1241e86fafbdce0404a40284dd7003cf796d401f35"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.517360 4724 generic.go:334] "Generic (PLEG): container finished" podID="e3a4fd93-b17a-411c-9173-a8038523ffac" containerID="ac0d1bb08699a831319f65e3d6736bc6b2fd07cff4e0f5772ed89ae287c00ba6" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.517490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wzkp" event={"ID":"e3a4fd93-b17a-411c-9173-a8038523ffac","Type":"ContainerDied","Data":"ac0d1bb08699a831319f65e3d6736bc6b2fd07cff4e0f5772ed89ae287c00ba6"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.520872 
4724 generic.go:334] "Generic (PLEG): container finished" podID="4cd2bad2-04ed-4658-b65e-c9a4f208114c" containerID="ca28e295cb85e5acc6e5e2021f2a9f421f208b6649d54f537bda9a2fd7c5fd5a" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.520934 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" event={"ID":"4cd2bad2-04ed-4658-b65e-c9a4f208114c","Type":"ContainerDied","Data":"ca28e295cb85e5acc6e5e2021f2a9f421f208b6649d54f537bda9a2fd7c5fd5a"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.522907 4724 generic.go:334] "Generic (PLEG): container finished" podID="44b41778-b0c6-4bc1-8754-99fc38f1dad5" containerID="7b271015b5b623ab6defaec9155bea3f7ecb86ecbf35e24892b5669611884a1c" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.522949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" event={"ID":"44b41778-b0c6-4bc1-8754-99fc38f1dad5","Type":"ContainerDied","Data":"7b271015b5b623ab6defaec9155bea3f7ecb86ecbf35e24892b5669611884a1c"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.530755 4724 generic.go:334] "Generic (PLEG): container finished" podID="34e7be71-74ab-423b-9dfd-bd025758573d" containerID="92fda6dc4db7212bc07635629965d9904de02e3889e3476e966c0be6f0eca3f3" exitCode=0 Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.530823 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2pkpd" event={"ID":"34e7be71-74ab-423b-9dfd-bd025758573d","Type":"ContainerDied","Data":"92fda6dc4db7212bc07635629965d9904de02e3889e3476e966c0be6f0eca3f3"} Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.634974 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f447dffc7-s2mfq" podStartSLOduration=8.634956206 podStartE2EDuration="8.634956206s" podCreationTimestamp="2026-02-23 17:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:46.622912462 +0000 UTC m=+1202.439112072" watchObservedRunningTime="2026-02-23 17:50:46.634956206 +0000 UTC m=+1202.451155806" Feb 23 17:50:46 crc kubenswrapper[4724]: I0223 17:50:46.963600 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5fc101-99f6-43b3-ad94-6e23741a2f27" path="/var/lib/kubelet/pods/0c5fc101-99f6-43b3-ad94-6e23741a2f27/volumes" Feb 23 17:50:47 crc kubenswrapper[4724]: I0223 17:50:47.518721 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:50:47 crc kubenswrapper[4724]: I0223 17:50:47.559849 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerStarted","Data":"b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.137177 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.303705 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts\") pod \"34e7be71-74ab-423b-9dfd-bd025758573d\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.303913 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnft2\" (UniqueName: \"kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2\") pod \"34e7be71-74ab-423b-9dfd-bd025758573d\" (UID: \"34e7be71-74ab-423b-9dfd-bd025758573d\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.305355 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34e7be71-74ab-423b-9dfd-bd025758573d" (UID: "34e7be71-74ab-423b-9dfd-bd025758573d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.314612 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2" (OuterVolumeSpecName: "kube-api-access-wnft2") pod "34e7be71-74ab-423b-9dfd-bd025758573d" (UID: "34e7be71-74ab-423b-9dfd-bd025758573d"). InnerVolumeSpecName "kube-api-access-wnft2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.406367 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e7be71-74ab-423b-9dfd-bd025758573d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.406420 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnft2\" (UniqueName: \"kubernetes.io/projected/34e7be71-74ab-423b-9dfd-bd025758573d-kube-api-access-wnft2\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.419603 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.429284 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.436894 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.449494 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.470168 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507224 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts\") pod \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2dj\" (UniqueName: \"kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj\") pod \"e3a4fd93-b17a-411c-9173-a8038523ffac\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507431 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgqj\" (UniqueName: \"kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj\") pod \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\" (UID: \"44b41778-b0c6-4bc1-8754-99fc38f1dad5\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507462 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts\") pod \"e3a4fd93-b17a-411c-9173-a8038523ffac\" (UID: \"e3a4fd93-b17a-411c-9173-a8038523ffac\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj28d\" (UniqueName: \"kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d\") pod \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.507611 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts\") pod \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\" (UID: \"4cd2bad2-04ed-4658-b65e-c9a4f208114c\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.508872 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4cd2bad2-04ed-4658-b65e-c9a4f208114c" (UID: "4cd2bad2-04ed-4658-b65e-c9a4f208114c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.509626 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3a4fd93-b17a-411c-9173-a8038523ffac" (UID: "e3a4fd93-b17a-411c-9173-a8038523ffac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.510063 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44b41778-b0c6-4bc1-8754-99fc38f1dad5" (UID: "44b41778-b0c6-4bc1-8754-99fc38f1dad5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.513600 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj" (OuterVolumeSpecName: "kube-api-access-kmgqj") pod "44b41778-b0c6-4bc1-8754-99fc38f1dad5" (UID: "44b41778-b0c6-4bc1-8754-99fc38f1dad5"). InnerVolumeSpecName "kube-api-access-kmgqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.513682 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d" (OuterVolumeSpecName: "kube-api-access-fj28d") pod "4cd2bad2-04ed-4658-b65e-c9a4f208114c" (UID: "4cd2bad2-04ed-4658-b65e-c9a4f208114c"). InnerVolumeSpecName "kube-api-access-fj28d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.514805 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj" (OuterVolumeSpecName: "kube-api-access-9j2dj") pod "e3a4fd93-b17a-411c-9173-a8038523ffac" (UID: "e3a4fd93-b17a-411c-9173-a8038523ffac"). InnerVolumeSpecName "kube-api-access-9j2dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.575891 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerStarted","Data":"0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.578980 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" event={"ID":"4cd2bad2-04ed-4658-b65e-c9a4f208114c","Type":"ContainerDied","Data":"6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.579010 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc9a5c15108fe3f53bb7ea46ebb67843fba09448faa8005a2b6d5fcc25033f1" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.579062 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5016-account-create-update-qmmcj" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.583032 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-phssd" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.583240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-phssd" event={"ID":"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b","Type":"ContainerDied","Data":"ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.583360 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9759eb5b44822266d59632af4319e5a24e04166377ffb5ea5791a269901d7b" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.585588 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6wzkp" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.585601 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wzkp" event={"ID":"e3a4fd93-b17a-411c-9173-a8038523ffac","Type":"ContainerDied","Data":"0cb183b570c886f20213a6c41179b42c248b2eac7865176ab464ead013c474b1"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.585638 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb183b570c886f20213a6c41179b42c248b2eac7865176ab464ead013c474b1" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.587155 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" event={"ID":"44b41778-b0c6-4bc1-8754-99fc38f1dad5","Type":"ContainerDied","Data":"1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.587181 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fda88a792f912b96c52c3a4094b293af58992d6afa16a1ab88c8e81e61c89b7" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.587226 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-586e-account-create-update-5vfvj" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.592944 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2pkpd" event={"ID":"34e7be71-74ab-423b-9dfd-bd025758573d","Type":"ContainerDied","Data":"5540b46feaec60b34e09c579d2a0855e9ae2aca9b9b62edb08ed44e7b54f6b6f"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.593206 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5540b46feaec60b34e09c579d2a0855e9ae2aca9b9b62edb08ed44e7b54f6b6f" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.592955 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2pkpd" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.594268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a49e-account-create-update-w5ddb" event={"ID":"b54d2670-b9ee-480a-a622-386abf8656f1","Type":"ContainerDied","Data":"4b0fa17c4b4ca158a4000543e4aa9d6e7ab2a96327ab0748c250b04e42034ffb"} Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.594307 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b0fa17c4b4ca158a4000543e4aa9d6e7ab2a96327ab0748c250b04e42034ffb" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.594285 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a49e-account-create-update-w5ddb" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.609703 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts\") pod \"b54d2670-b9ee-480a-a622-386abf8656f1\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.609827 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwn7\" (UniqueName: \"kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7\") pod \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.609885 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts\") pod \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\" (UID: \"59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.610545 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b54d2670-b9ee-480a-a622-386abf8656f1" (UID: "b54d2670-b9ee-480a-a622-386abf8656f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.609935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvg4s\" (UniqueName: \"kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s\") pod \"b54d2670-b9ee-480a-a622-386abf8656f1\" (UID: \"b54d2670-b9ee-480a-a622-386abf8656f1\") " Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.610978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" (UID: "59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611427 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgqj\" (UniqueName: \"kubernetes.io/projected/44b41778-b0c6-4bc1-8754-99fc38f1dad5-kube-api-access-kmgqj\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611453 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a4fd93-b17a-411c-9173-a8038523ffac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611466 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj28d\" (UniqueName: \"kubernetes.io/projected/4cd2bad2-04ed-4658-b65e-c9a4f208114c-kube-api-access-fj28d\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611477 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd2bad2-04ed-4658-b65e-c9a4f208114c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611489 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54d2670-b9ee-480a-a622-386abf8656f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611498 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b41778-b0c6-4bc1-8754-99fc38f1dad5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611506 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.611518 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2dj\" (UniqueName: \"kubernetes.io/projected/e3a4fd93-b17a-411c-9173-a8038523ffac-kube-api-access-9j2dj\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.625117 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s" (OuterVolumeSpecName: "kube-api-access-qvg4s") pod "b54d2670-b9ee-480a-a622-386abf8656f1" (UID: "b54d2670-b9ee-480a-a622-386abf8656f1"). InnerVolumeSpecName "kube-api-access-qvg4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.640167 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7" (OuterVolumeSpecName: "kube-api-access-7bwn7") pod "59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" (UID: "59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b"). InnerVolumeSpecName "kube-api-access-7bwn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.712958 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwn7\" (UniqueName: \"kubernetes.io/projected/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b-kube-api-access-7bwn7\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:48 crc kubenswrapper[4724]: I0223 17:50:48.712986 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvg4s\" (UniqueName: \"kubernetes.io/projected/b54d2670-b9ee-480a-a622-386abf8656f1-kube-api-access-qvg4s\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.336713 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430230 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbch8\" (UniqueName: \"kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430286 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430314 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430384 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430407 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430476 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430634 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430956 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom\") pod \"3be48d90-f238-4e9e-83ca-c91030530489\" (UID: \"3be48d90-f238-4e9e-83ca-c91030530489\") " Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.431523 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3be48d90-f238-4e9e-83ca-c91030530489-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.430707 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs" (OuterVolumeSpecName: "logs") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.478252 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts" (OuterVolumeSpecName: "scripts") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.482699 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8" (OuterVolumeSpecName: "kube-api-access-qbch8") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "kube-api-access-qbch8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.529783 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.535143 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.535678 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.537796 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.537828 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbch8\" (UniqueName: \"kubernetes.io/projected/3be48d90-f238-4e9e-83ca-c91030530489-kube-api-access-qbch8\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.537849 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be48d90-f238-4e9e-83ca-c91030530489-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.537861 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.582859 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data" (OuterVolumeSpecName: "config-data") pod "3be48d90-f238-4e9e-83ca-c91030530489" (UID: "3be48d90-f238-4e9e-83ca-c91030530489"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.611404 4724 generic.go:334] "Generic (PLEG): container finished" podID="3be48d90-f238-4e9e-83ca-c91030530489" containerID="5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4" exitCode=137 Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.611452 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerDied","Data":"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4"} Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.611479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3be48d90-f238-4e9e-83ca-c91030530489","Type":"ContainerDied","Data":"22651926b0e1f6ef8ea3fb870a5ea3830f4e073323e4032ef8adfef054a93e38"} Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.611500 4724 scope.go:117] "RemoveContainer" containerID="5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.611638 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.639515 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be48d90-f238-4e9e-83ca-c91030530489-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.648066 4724 scope.go:117] "RemoveContainer" containerID="812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.654521 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.671438 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.699501 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.699971 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a4fd93-b17a-411c-9173-a8038523ffac" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.699983 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a4fd93-b17a-411c-9173-a8038523ffac" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.699994 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e7be71-74ab-423b-9dfd-bd025758573d" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700000 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e7be71-74ab-423b-9dfd-bd025758573d" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700015 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd2bad2-04ed-4658-b65e-c9a4f208114c" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700022 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd2bad2-04ed-4658-b65e-c9a4f208114c" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700037 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api-log" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700043 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api-log" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700057 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700062 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700074 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700080 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700089 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54d2670-b9ee-480a-a622-386abf8656f1" 
containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700094 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54d2670-b9ee-480a-a622-386abf8656f1" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.700106 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b41778-b0c6-4bc1-8754-99fc38f1dad5" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700111 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b41778-b0c6-4bc1-8754-99fc38f1dad5" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700307 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b41778-b0c6-4bc1-8754-99fc38f1dad5" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700317 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700330 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700344 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be48d90-f238-4e9e-83ca-c91030530489" containerName="cinder-api-log" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700354 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a4fd93-b17a-411c-9173-a8038523ffac" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700362 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54d2670-b9ee-480a-a622-386abf8656f1" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700372 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e7be71-74ab-423b-9dfd-bd025758573d" containerName="mariadb-database-create" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.700405 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd2bad2-04ed-4658-b65e-c9a4f208114c" containerName="mariadb-account-create-update" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.701434 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.702089 4724 scope.go:117] "RemoveContainer" containerID="5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.702883 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4\": container with ID starting with 5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4 not found: ID does not exist" containerID="5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.702958 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4"} err="failed to get container status \"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4\": rpc error: code = NotFound desc = could not find container \"5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4\": container with ID starting with 5de0dc6fbea53f23211a32ac5ef2f314d7a3c7372205893cc1dbde96b16b5bc4 not found: ID does not exist" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.703012 4724 scope.go:117] "RemoveContainer" containerID="812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a" Feb 23 17:50:49 crc kubenswrapper[4724]: E0223 17:50:49.703297 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a\": container with ID starting with 812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a not found: ID does not exist" containerID="812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.703325 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a"} err="failed to get container status \"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a\": rpc error: code = NotFound desc = could not find container \"812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a\": container with ID starting with 812fa5738325c6e1f6b16ffbfe57f31e025f829ffea575c001088200bfa4705a not found: ID does not exist" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.708001 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.708464 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.708625 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.715093 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844649 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " 
pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844708 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnld\" (UniqueName: \"kubernetes.io/projected/6c03aee9-806f-4319-a3b8-b3226a740f4b-kube-api-access-qqnld\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844777 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844807 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-scripts\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844895 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c03aee9-806f-4319-a3b8-b3226a740f4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.844948 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c03aee9-806f-4319-a3b8-b3226a740f4b-logs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.845010 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946350 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946411 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6c03aee9-806f-4319-a3b8-b3226a740f4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946457 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c03aee9-806f-4319-a3b8-b3226a740f4b-logs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946576 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946605 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqnld\" (UniqueName: \"kubernetes.io/projected/6c03aee9-806f-4319-a3b8-b3226a740f4b-kube-api-access-qqnld\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946622 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.946641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-scripts\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.947302 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c03aee9-806f-4319-a3b8-b3226a740f4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.947925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c03aee9-806f-4319-a3b8-b3226a740f4b-logs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.950997 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-scripts\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.951028 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.951374 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.951549 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-config-data\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.952228 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.952897 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c03aee9-806f-4319-a3b8-b3226a740f4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:49 crc kubenswrapper[4724]: I0223 17:50:49.964786 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqnld\" (UniqueName: \"kubernetes.io/projected/6c03aee9-806f-4319-a3b8-b3226a740f4b-kube-api-access-qqnld\") pod \"cinder-api-0\" (UID: \"6c03aee9-806f-4319-a3b8-b3226a740f4b\") " pod="openstack/cinder-api-0" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.018563 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.354772 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s46d2"] Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.356634 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.366704 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s46d2"] Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.367954 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.368142 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.368248 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bfc5w" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.460054 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.460133 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plfz5\" (UniqueName: \"kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.460202 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.460256 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.563445 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.563519 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plfz5\" (UniqueName: \"kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.563588 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: 
\"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.563619 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.573045 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.573583 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.582998 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plfz5\" (UniqueName: \"kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.583588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data\") pod \"nova-cell0-conductor-db-sync-s46d2\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.643226 4724 generic.go:334] "Generic (PLEG): container finished" podID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" exitCode=1 Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.643308 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860"} Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.643371 4724 scope.go:117] "RemoveContainer" containerID="eea15f12b37ff5426ac01301fcccf9eee8bdd329a3a18203c6e4bee6ba83abfd" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.644026 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:50:50 crc kubenswrapper[4724]: E0223 17:50:50.644439 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.651770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerStarted","Data":"d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6"} Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.651964 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-central-agent" containerID="cri-o://8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815" gracePeriod=30 Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.652062 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="proxy-httpd" containerID="cri-o://d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6" gracePeriod=30 Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.652097 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-notification-agent" containerID="cri-o://b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac" gracePeriod=30 Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.652149 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="sg-core" containerID="cri-o://0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af" gracePeriod=30 Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.652068 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.691020 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.693718889 podStartE2EDuration="5.690999379s" podCreationTimestamp="2026-02-23 17:50:45 +0000 UTC" firstStartedPulling="2026-02-23 17:50:46.183164448 +0000 UTC m=+1201.999364048" lastFinishedPulling="2026-02-23 17:50:50.180444938 +0000 UTC m=+1205.996644538" observedRunningTime="2026-02-23 17:50:50.682651158 +0000 UTC m=+1206.498850758" watchObservedRunningTime="2026-02-23 17:50:50.690999379 +0000 UTC m=+1206.507198979" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.730171 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.767912 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.903191 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74674fd4f8-mmmpd" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.903285 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:50:50 crc kubenswrapper[4724]: I0223 17:50:50.976008 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be48d90-f238-4e9e-83ca-c91030530489" path="/var/lib/kubelet/pods/3be48d90-f238-4e9e-83ca-c91030530489/volumes" Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.186588 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84d9ddfbc9-spsrv" Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.255953 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b67f89948-r429p"] Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.256157 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b67f89948-r429p" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-api" containerID="cri-o://7cbacc20033fdc2236a16a7da6062a41d5ecb3f9fb4a5be55257888fd066fbe4" gracePeriod=30 Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.258261 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b67f89948-r429p" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-httpd" containerID="cri-o://c5d4f31a76e7501685c05c02ab7f671b92f2076ecbd40beaa6b549572565c277" gracePeriod=30 Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.268623 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s46d2"] Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.557525 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.557579 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.665485 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:50:51 crc kubenswrapper[4724]: E0223 17:50:51.666029 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.666306 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s46d2" 
event={"ID":"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb","Type":"ContainerStarted","Data":"5c432d31acb83dbd512249496937d5ac81819411763ab60224df7a7f258cc3e3"} Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.670967 4724 generic.go:334] "Generic (PLEG): container finished" podID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerID="c5d4f31a76e7501685c05c02ab7f671b92f2076ecbd40beaa6b549572565c277" exitCode=0 Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.671059 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerDied","Data":"c5d4f31a76e7501685c05c02ab7f671b92f2076ecbd40beaa6b549572565c277"} Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.673064 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c03aee9-806f-4319-a3b8-b3226a740f4b","Type":"ContainerStarted","Data":"468e8384546844dfbac09d5f6d693ceee5bb428ae5c487856603c202c9ea5382"} Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.676966 4724 generic.go:334] "Generic (PLEG): container finished" podID="20cceea2-c746-4269-990c-5032594f1196" containerID="0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af" exitCode=2 Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.677620 4724 generic.go:334] "Generic (PLEG): container finished" podID="20cceea2-c746-4269-990c-5032594f1196" containerID="b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac" exitCode=0 Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.677043 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerDied","Data":"0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af"} Feb 23 17:50:51 crc kubenswrapper[4724]: I0223 17:50:51.677674 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerDied","Data":"b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac"} Feb 23 17:50:52 crc kubenswrapper[4724]: I0223 17:50:52.690678 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c03aee9-806f-4319-a3b8-b3226a740f4b","Type":"ContainerStarted","Data":"44aae43df8efe1fdb1b6197ad0568f718a7be041244e75f9fed295ceaac9c421"} Feb 23 17:50:52 crc kubenswrapper[4724]: I0223 17:50:52.691010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c03aee9-806f-4319-a3b8-b3226a740f4b","Type":"ContainerStarted","Data":"1a23839161de53dc49e96d917378e51df92f629603efd7b1187e3df84c01052d"} Feb 23 17:50:52 crc kubenswrapper[4724]: I0223 17:50:52.691286 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 17:50:52 crc kubenswrapper[4724]: I0223 17:50:52.709257 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.709233328 podStartE2EDuration="3.709233328s" podCreationTimestamp="2026-02-23 17:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:50:52.707539305 +0000 UTC m=+1208.523738905" watchObservedRunningTime="2026-02-23 17:50:52.709233328 +0000 UTC m=+1208.525432918" Feb 23 17:50:53 crc kubenswrapper[4724]: I0223 17:50:53.702102 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="20cceea2-c746-4269-990c-5032594f1196" containerID="8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815" exitCode=0 Feb 23 17:50:53 crc kubenswrapper[4724]: I0223 17:50:53.702147 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerDied","Data":"8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815"} Feb 23 17:50:53 crc kubenswrapper[4724]: I0223 17:50:53.893893 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:53 crc kubenswrapper[4724]: I0223 17:50:53.910861 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f447dffc7-s2mfq" Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196276 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196331 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196366 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196416 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196445 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af.scope": 0x40000100 == IN_CREATE|IN_ISDIR): 
inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: W0223 17:50:54.196469 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af.scope: no such file or directory Feb 23 17:50:54 crc kubenswrapper[4724]: I0223 17:50:54.724707 4724 generic.go:334] "Generic (PLEG): container finished" podID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerID="e2211db7088619e4eb64abce15e5e8d41646526a13426ffbd781e4629c000ebd" exitCode=137 Feb 23 17:50:54 crc kubenswrapper[4724]: I0223 17:50:54.725971 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerDied","Data":"e2211db7088619e4eb64abce15e5e8d41646526a13426ffbd781e4629c000ebd"} Feb 23 17:50:54 crc kubenswrapper[4724]: I0223 17:50:54.868688 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020082 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020153 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-529jm\" (UniqueName: \"kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020256 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.020498 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key\") pod \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\" (UID: \"df53406b-fb3c-41f5-86af-b78ac8d5df6d\") " Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.021067 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs" (OuterVolumeSpecName: "logs") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.027153 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm" (OuterVolumeSpecName: "kube-api-access-529jm") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "kube-api-access-529jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.030158 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.048888 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data" (OuterVolumeSpecName: "config-data") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.049781 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts" (OuterVolumeSpecName: "scripts") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.065787 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.082364 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "df53406b-fb3c-41f5-86af-b78ac8d5df6d" (UID: "df53406b-fb3c-41f5-86af-b78ac8d5df6d"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.122999 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123248 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123321 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123378 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/df53406b-fb3c-41f5-86af-b78ac8d5df6d-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123450 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-529jm\" (UniqueName: \"kubernetes.io/projected/df53406b-fb3c-41f5-86af-b78ac8d5df6d-kube-api-access-529jm\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123510 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df53406b-fb3c-41f5-86af-b78ac8d5df6d-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.123574 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df53406b-fb3c-41f5-86af-b78ac8d5df6d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.749379 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74674fd4f8-mmmpd" event={"ID":"df53406b-fb3c-41f5-86af-b78ac8d5df6d","Type":"ContainerDied","Data":"655a790b2a5fc56d6a2ccd75d9f2c86566b7cd4eef511372c724e4ee48743d8e"} Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.749529 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74674fd4f8-mmmpd" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.749796 4724 scope.go:117] "RemoveContainer" containerID="1655072b2b368448156effff044965d4dd72cc86d075ab29bd3d947a764a0158" Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.784738 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.795048 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74674fd4f8-mmmpd"] Feb 23 17:50:55 crc kubenswrapper[4724]: I0223 17:50:55.964751 4724 scope.go:117] "RemoveContainer" containerID="e2211db7088619e4eb64abce15e5e8d41646526a13426ffbd781e4629c000ebd" Feb 23 17:50:56 crc kubenswrapper[4724]: I0223 17:50:56.769847 4724 generic.go:334] "Generic (PLEG): container finished" podID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerID="7cbacc20033fdc2236a16a7da6062a41d5ecb3f9fb4a5be55257888fd066fbe4" exitCode=0 Feb 23 17:50:56 crc kubenswrapper[4724]: I0223 17:50:56.769909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerDied","Data":"7cbacc20033fdc2236a16a7da6062a41d5ecb3f9fb4a5be55257888fd066fbe4"} Feb 23 17:50:56 crc kubenswrapper[4724]: I0223 17:50:56.969007 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" path="/var/lib/kubelet/pods/df53406b-fb3c-41f5-86af-b78ac8d5df6d/volumes" Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.389940 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b67f89948-r429p" Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.538342 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz8s7\" (UniqueName: \"kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7\") pod \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.538479 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs\") pod \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.538524 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config\") pod \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.538559 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle\") pod \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.538612 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config\") pod \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\" (UID: \"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5\") " Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 
17:51:01.553555 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" (UID: "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.554234 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7" (OuterVolumeSpecName: "kube-api-access-hz8s7") pod "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" (UID: "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5"). InnerVolumeSpecName "kube-api-access-hz8s7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.626567 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" (UID: "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.632539 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config" (OuterVolumeSpecName: "config") pod "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" (UID: "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.641541 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz8s7\" (UniqueName: \"kubernetes.io/projected/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-kube-api-access-hz8s7\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.641569 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.641579 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.641589 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-config\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.650439 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" (UID: "dbbdbb8a-6e82-49cb-b631-8d8646d28dc5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.743336 4724 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.820158 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b67f89948-r429p" event={"ID":"dbbdbb8a-6e82-49cb-b631-8d8646d28dc5","Type":"ContainerDied","Data":"995033ca820787bed1cf5548f851028fd863649a3077f41c19941b2449420faf"}
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.820231 4724 scope.go:117] "RemoveContainer" containerID="c5d4f31a76e7501685c05c02ab7f671b92f2076ecbd40beaa6b549572565c277"
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.820291 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b67f89948-r429p"
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.821549 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s46d2" event={"ID":"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb","Type":"ContainerStarted","Data":"b5c8f1ce7cacc65a9f809d2af43294f845ccae54956d141e9e38f6ecb6966019"}
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.841536 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s46d2" podStartSLOduration=2.084377952 podStartE2EDuration="11.841515394s" podCreationTimestamp="2026-02-23 17:50:50 +0000 UTC" firstStartedPulling="2026-02-23 17:50:51.301210587 +0000 UTC m=+1207.117410187" lastFinishedPulling="2026-02-23 17:51:01.058348029 +0000 UTC m=+1216.874547629" observedRunningTime="2026-02-23 17:51:01.837439971 +0000 UTC m=+1217.653639571" watchObservedRunningTime="2026-02-23 17:51:01.841515394 +0000 UTC m=+1217.657714994"
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.841990 4724 scope.go:117] "RemoveContainer" containerID="7cbacc20033fdc2236a16a7da6062a41d5ecb3f9fb4a5be55257888fd066fbe4"
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.863867 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b67f89948-r429p"]
Feb 23 17:51:01 crc kubenswrapper[4724]: I0223 17:51:01.872125 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b67f89948-r429p"]
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.236293 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.293822 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69f7cbf768-jd6kh"
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.916047 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69f7cbf768-jd6kh"
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.951940 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860"
Feb 23 17:51:02 crc kubenswrapper[4724]: E0223 17:51:02.952197 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b"
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.964803 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" path="/var/lib/kubelet/pods/dbbdbb8a-6e82-49cb-b631-8d8646d28dc5/volumes"
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.998803 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-57d985d94b-jc7cf"]
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.999012 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-57d985d94b-jc7cf" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-log" containerID="cri-o://083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d" gracePeriod=30
Feb 23 17:51:02 crc kubenswrapper[4724]: I0223 17:51:02.999434 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-57d985d94b-jc7cf" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-api" containerID="cri-o://c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df" gracePeriod=30
Feb 23 17:51:03 crc kubenswrapper[4724]: I0223 17:51:03.859303 4724 generic.go:334] "Generic (PLEG): container finished" podID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerID="083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d" exitCode=143
Feb 23 17:51:03 crc kubenswrapper[4724]: I0223 17:51:03.859344 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerDied","Data":"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"}
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.294500 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57d985d94b-jc7cf"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.431401 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.431496 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.431554 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.431927 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs" (OuterVolumeSpecName: "logs") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.431588 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.432271 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.432564 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.432736 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx5bf\" (UniqueName: \"kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf\") pod \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\" (UID: \"63ff397b-64ac-4aa1-b20e-e2570bcc4423\") "
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.433448 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63ff397b-64ac-4aa1-b20e-e2570bcc4423-logs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.437582 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf" (OuterVolumeSpecName: "kube-api-access-tx5bf") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "kube-api-access-tx5bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.463095 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts" (OuterVolumeSpecName: "scripts") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.496700 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.513452 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data" (OuterVolumeSpecName: "config-data") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.535144 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.535174 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.535186 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.535199 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx5bf\" (UniqueName: \"kubernetes.io/projected/63ff397b-64ac-4aa1-b20e-e2570bcc4423-kube-api-access-tx5bf\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.538584 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.562928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "63ff397b-64ac-4aa1-b20e-e2570bcc4423" (UID: "63ff397b-64ac-4aa1-b20e-e2570bcc4423"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.637372 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.637411 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ff397b-64ac-4aa1-b20e-e2570bcc4423-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.873620 4724 generic.go:334] "Generic (PLEG): container finished" podID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerID="c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df" exitCode=0
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.873669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerDied","Data":"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"}
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.873701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57d985d94b-jc7cf" event={"ID":"63ff397b-64ac-4aa1-b20e-e2570bcc4423","Type":"ContainerDied","Data":"faf8e8e01e595a8926d32d61a6c4629c500201c575f87384f9f4d17c3b44b7b3"}
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.873722 4724 scope.go:117] "RemoveContainer" containerID="c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.873750 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57d985d94b-jc7cf"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.901875 4724 scope.go:117] "RemoveContainer" containerID="083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.927945 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-57d985d94b-jc7cf"]
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.933576 4724 scope.go:117] "RemoveContainer" containerID="c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"
Feb 23 17:51:04 crc kubenswrapper[4724]: E0223 17:51:04.934090 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df\": container with ID starting with c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df not found: ID does not exist" containerID="c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.934147 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df"} err="failed to get container status \"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df\": rpc error: code = NotFound desc = could not find container \"c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df\": container with ID starting with c554f8541c449c23f2af4f7f15d503e07ff2f40beb831b1f9e1efc00f9b315df not found: ID does not exist"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.934177 4724 scope.go:117] "RemoveContainer" containerID="083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"
Feb 23 17:51:04 crc kubenswrapper[4724]: E0223 17:51:04.934916 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d\": container with ID starting with 083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d not found: ID does not exist" containerID="083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.934959 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d"} err="failed to get container status \"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d\": rpc error: code = NotFound desc = could not find container \"083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d\": container with ID starting with 083a4d92c4ac23306dfbe7d05c72f6d9262c3da71b1d63656163c941e3be0e4d not found: ID does not exist"
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.948372 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-57d985d94b-jc7cf"]
Feb 23 17:51:04 crc kubenswrapper[4724]: I0223 17:51:04.967328 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" path="/var/lib/kubelet/pods/63ff397b-64ac-4aa1-b20e-e2570bcc4423/volumes"
Feb 23 17:51:06 crc kubenswrapper[4724]: I0223 17:51:06.512907 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:06 crc kubenswrapper[4724]: I0223 17:51:06.513421 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-log" containerID="cri-o://bff2dc251ebe0f525f3a9b7f471ef5c7e64ab32deb29c9382a6695ad17c6762e" gracePeriod=30
Feb 23 17:51:06 crc kubenswrapper[4724]: I0223 17:51:06.513499 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-httpd" containerID="cri-o://294f31868cdcc017ecaf968c643faacf81ef76a3632549753f1869e91df6f12e" gracePeriod=30
Feb 23 17:51:06 crc kubenswrapper[4724]: I0223 17:51:06.894058 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerID="bff2dc251ebe0f525f3a9b7f471ef5c7e64ab32deb29c9382a6695ad17c6762e" exitCode=143
Feb 23 17:51:06 crc kubenswrapper[4724]: I0223 17:51:06.894102 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerDied","Data":"bff2dc251ebe0f525f3a9b7f471ef5c7e64ab32deb29c9382a6695ad17c6762e"}
Feb 23 17:51:07 crc kubenswrapper[4724]: I0223 17:51:07.909125 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerID="294f31868cdcc017ecaf968c643faacf81ef76a3632549753f1869e91df6f12e" exitCode=0
Feb 23 17:51:07 crc kubenswrapper[4724]: I0223 17:51:07.909175 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerDied","Data":"294f31868cdcc017ecaf968c643faacf81ef76a3632549753f1869e91df6f12e"}
Feb 23 17:51:07 crc kubenswrapper[4724]: I0223 17:51:07.909224 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea216262-5ec8-4c74-8cec-376d7241e6a8","Type":"ContainerDied","Data":"c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3"}
Feb 23 17:51:07 crc kubenswrapper[4724]: I0223 17:51:07.909242 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20d182e34a553f8333eed087e8443f3e1b978f18ffd378bf76283c24daaf6c3"
Feb 23 17:51:07 crc kubenswrapper[4724]: I0223 17:51:07.989977 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.072336 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.072842 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-log" containerID="cri-o://b6983b542727d47e3909c7a0e5d2098fbe7de7dab8de7baa5b87c28a3af808db" gracePeriod=30
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.073280 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-httpd" containerID="cri-o://9bfe9e18ae2365edf0716f2f387235bee21577f05d64aed5741d350a3ebde028" gracePeriod=30
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.102941 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn4dr\" (UniqueName: \"kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103036 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103109 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103138 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103195 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103254 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103281 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs\") pod \"ea216262-5ec8-4c74-8cec-376d7241e6a8\" (UID: \"ea216262-5ec8-4c74-8cec-376d7241e6a8\") "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.103874 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.106906 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs" (OuterVolumeSpecName: "logs") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.108776 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.110016 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts" (OuterVolumeSpecName: "scripts") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.115729 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr" (OuterVolumeSpecName: "kube-api-access-jn4dr") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "kube-api-access-jn4dr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.147544 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.168977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.202435 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data" (OuterVolumeSpecName: "config-data") pod "ea216262-5ec8-4c74-8cec-376d7241e6a8" (UID: "ea216262-5ec8-4c74-8cec-376d7241e6a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206063 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn4dr\" (UniqueName: \"kubernetes.io/projected/ea216262-5ec8-4c74-8cec-376d7241e6a8-kube-api-access-jn4dr\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206125 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206137 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206147 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206155 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206163 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206170 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea216262-5ec8-4c74-8cec-376d7241e6a8-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.206178 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea216262-5ec8-4c74-8cec-376d7241e6a8-logs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.233198 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.309561 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.921632 4724 generic.go:334] "Generic (PLEG): container finished" podID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerID="b6983b542727d47e3909c7a0e5d2098fbe7de7dab8de7baa5b87c28a3af808db" exitCode=143
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.921750 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerDied","Data":"b6983b542727d47e3909c7a0e5d2098fbe7de7dab8de7baa5b87c28a3af808db"}
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.921962 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.967350 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.975090 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:08 crc kubenswrapper[4724]: I0223 17:51:08.998328 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:08 crc kubenswrapper[4724]: E0223 17:51:08.999091 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.005648 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.005896 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006003 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.006106 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006197 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.006291 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006370 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.006496 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006577 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.006669 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006765 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.006860 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.006949 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: E0223 17:51:09.007093 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.007185 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.007669 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.007791 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.007888 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="df53406b-fb3c-41f5-86af-b78ac8d5df6d" containerName="horizon-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.007996 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.008091 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-log"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.008205 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ff397b-64ac-4aa1-b20e-e2570bcc4423" containerName="placement-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.008293 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbdbb8a-6e82-49cb-b631-8d8646d28dc5" containerName="neutron-api"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.008357 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" containerName="glance-httpd"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.009310 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.009508 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.024837 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.025042 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.126670 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-logs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.126837 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.126900 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.126925 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.126999 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-scripts\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.127039 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-config-data\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.127125 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bkn\" (UniqueName: \"kubernetes.io/projected/8883a549-3562-42b7-86d4-934c3076f934-kube-api-access-67bkn\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.127238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.238318 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-logs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.238950 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.239183 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.239240 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.239316 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-logs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.239468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-scripts\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.241376 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.239536 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-config-data\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.245319 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bkn\" (UniqueName: \"kubernetes.io/projected/8883a549-3562-42b7-86d4-934c3076f934-kube-api-access-67bkn\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.245443 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.246077 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8883a549-3562-42b7-86d4-934c3076f934-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.248234 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.248510 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-scripts\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.249151 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.250564 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a549-3562-42b7-86d4-934c3076f934-config-data\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.262289 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bkn\" (UniqueName: \"kubernetes.io/projected/8883a549-3562-42b7-86d4-934c3076f934-kube-api-access-67bkn\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.274218 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8883a549-3562-42b7-86d4-934c3076f934\") " pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.352686 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.921467 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.932850 4724 generic.go:334] "Generic (PLEG): container finished" podID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerID="9bfe9e18ae2365edf0716f2f387235bee21577f05d64aed5741d350a3ebde028" exitCode=0
Feb 23 17:51:09 crc kubenswrapper[4724]: I0223 17:51:09.933904 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerDied","Data":"9bfe9e18ae2365edf0716f2f387235bee21577f05d64aed5741d350a3ebde028"}
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.657741 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.677543 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678071 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678100 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678152 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678210 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fwtr\" (UniqueName: \"kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678309 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678342 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts\") pod \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\" (UID: \"b46d2359-d4b2-4f2a-9d22-52928aa39da8\") "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678684 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs" (OuterVolumeSpecName: "logs") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.678759 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-logs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.683263 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr" (OuterVolumeSpecName: "kube-api-access-4fwtr") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "kube-api-access-4fwtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.683430 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.685947 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts" (OuterVolumeSpecName: "scripts") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.686907 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.718493 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.752719 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data" (OuterVolumeSpecName: "config-data") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780286 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b46d2359-d4b2-4f2a-9d22-52928aa39da8-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780329 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780339 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780350 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fwtr\" (UniqueName: \"kubernetes.io/projected/b46d2359-d4b2-4f2a-9d22-52928aa39da8-kube-api-access-4fwtr\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780359 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.780367 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.811908 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.817814 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b46d2359-d4b2-4f2a-9d22-52928aa39da8" (UID: "b46d2359-d4b2-4f2a-9d22-52928aa39da8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.884527 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.884556 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b46d2359-d4b2-4f2a-9d22-52928aa39da8-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.953061 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.968366 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea216262-5ec8-4c74-8cec-376d7241e6a8" path="/var/lib/kubelet/pods/ea216262-5ec8-4c74-8cec-376d7241e6a8/volumes"
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.969495 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b46d2359-d4b2-4f2a-9d22-52928aa39da8","Type":"ContainerDied","Data":"9ade4ad5f2754b3ecaabd2f44b8ddcd9075c78fa4fddb524ee085f8630c1d51d"}
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.969960 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8883a549-3562-42b7-86d4-934c3076f934","Type":"ContainerStarted","Data":"79bfc09c93277ca92cf99a9c591296633a10fa150b8e0494f85f72bc55ec6c19"}
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.969976 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8883a549-3562-42b7-86d4-934c3076f934","Type":"ContainerStarted","Data":"82505e677133096fafde28590a64c729613a15aad0e5fede481733cf282fb0aa"}
Feb 23 17:51:10 crc kubenswrapper[4724]: I0223 17:51:10.969993 4724 scope.go:117] "RemoveContainer" containerID="9bfe9e18ae2365edf0716f2f387235bee21577f05d64aed5741d350a3ebde028"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.007592 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.019785 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.029056 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 23 17:51:11 crc kubenswrapper[4724]: E0223 17:51:11.029832 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-httpd"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.029854 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-httpd"
Feb 23 17:51:11 crc kubenswrapper[4724]: E0223 17:51:11.029888 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-log"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.029896 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-log"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.030116 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-httpd"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.030149 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" containerName="glance-log"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.031544 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.037160 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.037220 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.037279 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.062821 4724 scope.go:117] "RemoveContainer" containerID="b6983b542727d47e3909c7a0e5d2098fbe7de7dab8de7baa5b87c28a3af808db"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191814 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191855 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-logs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191877 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmtkn\" (UniqueName: \"kubernetes.io/projected/260cff26-a398-4898-9708-61ef33a6aa00-kube-api-access-mmtkn\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191903 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.191925 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.192099 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.192145 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-logs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293794 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmtkn\" (UniqueName: \"kubernetes.io/projected/260cff26-a398-4898-9708-61ef33a6aa00-kube-api-access-mmtkn\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293830 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293858 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.293932 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.294041 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0"
Feb 23
17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.294448 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.294574 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-logs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.295216 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/260cff26-a398-4898-9708-61ef33a6aa00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.302291 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.312004 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.312185 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.314505 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260cff26-a398-4898-9708-61ef33a6aa00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.316729 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmtkn\" (UniqueName: \"kubernetes.io/projected/260cff26-a398-4898-9708-61ef33a6aa00-kube-api-access-mmtkn\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.326201 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"260cff26-a398-4898-9708-61ef33a6aa00\") " pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.376502 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.557864 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.558178 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.559143 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:51:11 crc kubenswrapper[4724]: E0223 17:51:11.559439 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.871543 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.967781 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"260cff26-a398-4898-9708-61ef33a6aa00","Type":"ContainerStarted","Data":"946e576fdde4b2d408971ee53af4b585a2d10cfffd358cab5b408810e6d1100c"} Feb 23 17:51:11 crc kubenswrapper[4724]: I0223 17:51:11.969564 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8883a549-3562-42b7-86d4-934c3076f934","Type":"ContainerStarted","Data":"418546ef645a4c1a893e4e91ba96100df2637307a934ee0353cc7cc7b044515a"} Feb 23 17:51:12 crc kubenswrapper[4724]: I0223 17:51:12.054633 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.054608097 podStartE2EDuration="4.054608097s" podCreationTimestamp="2026-02-23 17:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:12.00599606 +0000 UTC m=+1227.822195670" watchObservedRunningTime="2026-02-23 17:51:12.054608097 +0000 UTC m=+1227.870807697" Feb 23 17:51:12 crc kubenswrapper[4724]: I0223 17:51:12.965636 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b46d2359-d4b2-4f2a-9d22-52928aa39da8" path="/var/lib/kubelet/pods/b46d2359-d4b2-4f2a-9d22-52928aa39da8/volumes" Feb 23 17:51:12 crc kubenswrapper[4724]: I0223 17:51:12.983335 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"260cff26-a398-4898-9708-61ef33a6aa00","Type":"ContainerStarted","Data":"87b3587acf2decd33a158eaa52cd6663c94fa47284d4c91a0d7640ed6a3f9884"} Feb 23 17:51:13 crc kubenswrapper[4724]: I0223 17:51:13.994996 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"260cff26-a398-4898-9708-61ef33a6aa00","Type":"ContainerStarted","Data":"b690b168235bb491a90f45e23927d00c9dbf45aabee85862c1d227ab248adca1"} Feb 23 17:51:14 crc kubenswrapper[4724]: I0223 17:51:14.033658 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.033635786 podStartE2EDuration="4.033635786s" 
podCreationTimestamp="2026-02-23 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:14.020027853 +0000 UTC m=+1229.836227473" watchObservedRunningTime="2026-02-23 17:51:14.033635786 +0000 UTC m=+1229.849835386" Feb 23 17:51:15 crc kubenswrapper[4724]: I0223 17:51:15.552158 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 23 17:51:17 crc kubenswrapper[4724]: I0223 17:51:17.045245 4724 generic.go:334] "Generic (PLEG): container finished" podID="1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" containerID="b5c8f1ce7cacc65a9f809d2af43294f845ccae54956d141e9e38f6ecb6966019" exitCode=0 Feb 23 17:51:17 crc kubenswrapper[4724]: I0223 17:51:17.045349 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s46d2" event={"ID":"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb","Type":"ContainerDied","Data":"b5c8f1ce7cacc65a9f809d2af43294f845ccae54956d141e9e38f6ecb6966019"} Feb 23 17:51:17 crc kubenswrapper[4724]: E0223 17:51:17.330428 4724 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3704c6ed2fd54e305483b98a2bf15f35467f99ca38d476a9512636aeb2828ae9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3704c6ed2fd54e305483b98a2bf15f35467f99ca38d476a9512636aeb2828ae9/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_cinder-api-0_3be48d90-f238-4e9e-83ca-c91030530489/cinder-api/0.log" to get inode usage: stat /var/log/pods/openstack_cinder-api-0_3be48d90-f238-4e9e-83ca-c91030530489/cinder-api/0.log: no such file or directory Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.419936 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.527460 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle\") pod \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.527553 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data\") pod \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.527838 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plfz5\" (UniqueName: \"kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5\") pod \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.527896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts\") pod \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\" (UID: \"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb\") " Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.534644 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5" (OuterVolumeSpecName: "kube-api-access-plfz5") pod "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" (UID: "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb"). InnerVolumeSpecName "kube-api-access-plfz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.535242 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts" (OuterVolumeSpecName: "scripts") pod "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" (UID: "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.559886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" (UID: "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.561967 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data" (OuterVolumeSpecName: "config-data") pod "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" (UID: "1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.631184 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.631213 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.631224 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plfz5\" (UniqueName: \"kubernetes.io/projected/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-kube-api-access-plfz5\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:18 crc kubenswrapper[4724]: I0223 17:51:18.631232 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.067695 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s46d2" event={"ID":"1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb","Type":"ContainerDied","Data":"5c432d31acb83dbd512249496937d5ac81819411763ab60224df7a7f258cc3e3"} Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.067742 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c432d31acb83dbd512249496937d5ac81819411763ab60224df7a7f258cc3e3" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.067809 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s46d2" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.158526 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 17:51:19 crc kubenswrapper[4724]: E0223 17:51:19.158961 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" containerName="nova-cell0-conductor-db-sync" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.158986 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" containerName="nova-cell0-conductor-db-sync" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.159215 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" containerName="nova-cell0-conductor-db-sync" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.160026 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.162482 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.163166 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bfc5w" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.190455 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.241603 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.241952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.242131 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4x78\" (UniqueName: \"kubernetes.io/projected/557c3e1b-ccc8-48d7-8a2c-78de846beac2-kube-api-access-v4x78\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.343293 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4x78\" (UniqueName: \"kubernetes.io/projected/557c3e1b-ccc8-48d7-8a2c-78de846beac2-kube-api-access-v4x78\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.343430 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.343514 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.351210 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.352942 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557c3e1b-ccc8-48d7-8a2c-78de846beac2-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.353133 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.353161 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.363967 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4x78\" (UniqueName: \"kubernetes.io/projected/557c3e1b-ccc8-48d7-8a2c-78de846beac2-kube-api-access-v4x78\") pod \"nova-cell0-conductor-0\" (UID: \"557c3e1b-ccc8-48d7-8a2c-78de846beac2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.392472 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.394309 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.487883 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:19 crc kubenswrapper[4724]: I0223 17:51:19.990139 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 17:51:20 crc kubenswrapper[4724]: I0223 17:51:20.079536 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"557c3e1b-ccc8-48d7-8a2c-78de846beac2","Type":"ContainerStarted","Data":"f92c95ff051acaa2cc1793d7e4e92a722823521e91844520cf1cbe6ab922d335"} Feb 23 17:51:20 crc kubenswrapper[4724]: I0223 17:51:20.079883 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 17:51:20 crc kubenswrapper[4724]: I0223 17:51:20.079907 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 17:51:20 crc kubenswrapper[4724]: W0223 17:51:20.699794 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-conmon-d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6.scope: no such file or directory Feb 23 17:51:20 crc kubenswrapper[4724]: W0223 17:51:20.700133 4724 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c23170d_6cdc_4c4e_be8d_a4e61cb8feeb.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c23170d_6cdc_4c4e_be8d_a4e61cb8feeb.slice: no such file or directory Feb 23 17:51:20 crc kubenswrapper[4724]: W0223 17:51:20.700167 4724 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cceea2_c746_4269_990c_5032594f1196.slice/crio-d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6.scope: no such file or directory Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.099195 4724 generic.go:334] "Generic (PLEG): container finished" podID="20cceea2-c746-4269-990c-5032594f1196" containerID="d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6" exitCode=137 Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.099263 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerDied","Data":"d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6"} Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.099293 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20cceea2-c746-4269-990c-5032594f1196","Type":"ContainerDied","Data":"b6a0f995889e47ab2e0bcc54d2b60243f5582cc7b65b460b95e5794eb1b2984f"} Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.099304 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a0f995889e47ab2e0bcc54d2b60243f5582cc7b65b460b95e5794eb1b2984f" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.106675 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"557c3e1b-ccc8-48d7-8a2c-78de846beac2","Type":"ContainerStarted","Data":"eb5ce66f03728d772c69ce9d1f85abe329109ddc205cc1feaef82f78a8e2f9fb"} Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.106742 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.135237 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.135213653 podStartE2EDuration="2.135213653s" podCreationTimestamp="2026-02-23 17:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:21.126815772 +0000 UTC m=+1236.943015392" watchObservedRunningTime="2026-02-23 17:51:21.135213653 +0000 UTC m=+1236.951413263" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.141280 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.287966 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288013 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgnjv\" (UniqueName: \"kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288097 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288171 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288309 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288350 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.288415 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts\") pod \"20cceea2-c746-4269-990c-5032594f1196\" (UID: \"20cceea2-c746-4269-990c-5032594f1196\") " Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.289124 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.289283 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.289837 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.289893 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20cceea2-c746-4269-990c-5032594f1196-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.295301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv" (OuterVolumeSpecName: "kube-api-access-bgnjv") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "kube-api-access-bgnjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.306720 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts" (OuterVolumeSpecName: "scripts") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.319803 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.361728 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.377941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.379992 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.391816 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.391845 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgnjv\" (UniqueName: \"kubernetes.io/projected/20cceea2-c746-4269-990c-5032594f1196-kube-api-access-bgnjv\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.391856 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.391871 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.393228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data" (OuterVolumeSpecName: "config-data") pod "20cceea2-c746-4269-990c-5032594f1196" (UID: "20cceea2-c746-4269-990c-5032594f1196"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.411307 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.424178 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:21 crc kubenswrapper[4724]: I0223 17:51:21.493458 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cceea2-c746-4269-990c-5032594f1196-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.000432 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.000840 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.116899 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.118159 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.118186 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.169263 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.177212 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.193959 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:22 crc kubenswrapper[4724]: E0223 17:51:22.194365 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-notification-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194401 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-notification-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: E0223 17:51:22.194418 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="proxy-httpd" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194425 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="proxy-httpd" Feb 23 17:51:22 crc kubenswrapper[4724]: E0223 17:51:22.194436 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="sg-core" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194443 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="sg-core" Feb 23 17:51:22 crc kubenswrapper[4724]: E0223 17:51:22.194458 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-central-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194463 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-central-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194644 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="sg-core" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194664 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="proxy-httpd" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194673 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-central-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.194687 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cceea2-c746-4269-990c-5032594f1196" containerName="ceilometer-notification-agent" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.196327 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.199886 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.199944 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.214447 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308685 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308816 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308874 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308935 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvgc\" (UniqueName: \"kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.308991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.410668 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 
17:51:22.411070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411185 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411375 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411761 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411885 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.411998 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvgc\" (UniqueName: \"kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.412141 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.415270 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.416829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.417431 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.428964 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.445524 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvgc\" (UniqueName: \"kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc\") pod \"ceilometer-0\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") " pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.515167 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.961202 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cceea2-c746-4269-990c-5032594f1196" path="/var/lib/kubelet/pods/20cceea2-c746-4269-990c-5032594f1196/volumes" Feb 23 17:51:22 crc kubenswrapper[4724]: I0223 17:51:22.986480 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:23 crc kubenswrapper[4724]: I0223 17:51:23.000733 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:51:23 crc kubenswrapper[4724]: I0223 17:51:23.132782 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerStarted","Data":"c4f61a2556b232fbdf3f5d35217153ff743138ebe121e05aeb62aaad422b1d88"} Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.144360 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.145579 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.146516 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerStarted","Data":"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143"} Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.146717 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerStarted","Data":"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef"} Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.191782 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:24 crc kubenswrapper[4724]: I0223 17:51:24.241941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 17:51:25 crc kubenswrapper[4724]: I0223 17:51:25.157861 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerStarted","Data":"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249"} Feb 23 17:51:25 crc 
kubenswrapper[4724]: I0223 17:51:25.951774 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:51:25 crc kubenswrapper[4724]: E0223 17:51:25.952312 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(86eb7ff0-87b2-4538-8c5b-9126768e810b)\"" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" Feb 23 17:51:26 crc kubenswrapper[4724]: I0223 17:51:26.171019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerStarted","Data":"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7"} Feb 23 17:51:26 crc kubenswrapper[4724]: I0223 17:51:26.171434 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:51:26 crc kubenswrapper[4724]: I0223 17:51:26.200118 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5775138819999999 podStartE2EDuration="4.200085716s" podCreationTimestamp="2026-02-23 17:51:22 +0000 UTC" firstStartedPulling="2026-02-23 17:51:23.00048591 +0000 UTC m=+1238.816685500" lastFinishedPulling="2026-02-23 17:51:25.623057694 +0000 UTC m=+1241.439257334" observedRunningTime="2026-02-23 17:51:26.187139478 +0000 UTC m=+1242.003339098" watchObservedRunningTime="2026-02-23 17:51:26.200085716 +0000 UTC m=+1242.016285316" Feb 23 17:51:29 crc kubenswrapper[4724]: I0223 17:51:29.516889 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 17:51:29 crc kubenswrapper[4724]: I0223 17:51:29.989305 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5l6f9"] Feb 23 17:51:29 crc kubenswrapper[4724]: I0223 17:51:29.991106 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:29 crc kubenswrapper[4724]: I0223 17:51:29.996379 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 17:51:29 crc kubenswrapper[4724]: I0223 17:51:29.996421 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.000463 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5l6f9"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.093455 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxt5\" (UniqueName: \"kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.093540 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.093593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.093712 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.147115 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.151144 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.153362 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.160922 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.195323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.196316 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjxt5\" (UniqueName: \"kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.196547 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.200276 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.213382 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.214132 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.235193 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjxt5\" (UniqueName: \"kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.248013 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.249628 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.250258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data\") pod \"nova-cell0-cell-mapping-5l6f9\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") " pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.257070 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.288417 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.306115 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.306214 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.306323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.306344 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdg9\" (UniqueName: \"kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.309671 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.330685 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5l6f9" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.333704 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.348048 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.409904 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hqz\" (UniqueName: \"kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.409968 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410028 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410081 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410118 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdg9\" (UniqueName: \"kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410265 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjm8w\" (UniqueName: \"kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410356 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data\") pod \"nova-api-0\" (UID: 
\"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.410425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.411169 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.416103 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.423281 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.440045 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.466222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdg9\" (UniqueName: \"kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9\") pod \"nova-api-0\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.475641 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513125 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjm8w\" (UniqueName: \"kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513198 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513228 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hqz\" (UniqueName: \"kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513248 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513282 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.513320 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.518642 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.518714 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.520434 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.522962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.524294 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.528655 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.532317 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.536974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjm8w\" (UniqueName: \"kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w\") pod \"nova-scheduler-0\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.537198 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hqz\" (UniqueName: \"kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.551454 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.592278 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.605955 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.608507 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.619533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.619703 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgnm\" (UniqueName: \"kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.619936 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.620150 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.639239 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.644215 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722003 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgnm\" (UniqueName: \"kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722460 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722490 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwbr\" (UniqueName: \"kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722585 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722627 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " 
pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.722677 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.724191 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.728968 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.735885 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.742382 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgnm\" (UniqueName: \"kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm\") pod \"nova-metadata-0\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " pod="openstack/nova-metadata-0" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.824122 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.824485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.824660 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.824790 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.824967 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxwbr\" (UniqueName: 
\"kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.825108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.826296 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.826715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.826758 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.827442 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.827706 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.844719 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxwbr\" (UniqueName: \"kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr\") pod \"dnsmasq-dns-6bc99f56d9-tp8hh\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:30 crc kubenswrapper[4724]: I0223 17:51:30.977899 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.000472 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.038915 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5l6f9"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.130155 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-msc2q"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.132031 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.134872 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.135024 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.144699 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-msc2q"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.204106 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.221975 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.240819 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.240894 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7zx\" (UniqueName: \"kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.241184 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.241250 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.302219 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5l6f9" event={"ID":"1e0a28a1-5db9-4546-836f-1cfa21d4f068","Type":"ContainerStarted","Data":"45926abf516bb1c079f6165ad85510621e36053e595bff8b526429f0efb75a85"} Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.304157 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7807bbeb-d20e-4ec9-8587-3bac2e960ab6","Type":"ContainerStarted","Data":"fc93fe9bfda78917dc965102bfb4c95255e5df5190e61f314e3cd252048e14c9"} Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.305835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerStarted","Data":"dad5b0fa905224fe74ad05ad29ec4432941eec4a0e0600ddd0c3308ae17696db"} Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.343120 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.343205 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7zx\" (UniqueName: \"kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.343683 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.343745 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.352220 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.353822 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.354807 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.369407 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7zx\" (UniqueName: \"kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx\") pod \"nova-cell1-conductor-db-sync-msc2q\" (UID: 
\"54049f9e-01f1-475b-b008-401152f8ca55\") " pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.409696 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.503161 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.668973 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:51:31 crc kubenswrapper[4724]: W0223 17:51:31.679109 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc546e0ba_a0ef_44b7_a810_e405f8bca93e.slice/crio-1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe WatchSource:0}: Error finding container 1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe: Status 404 returned error can't find the container with id 1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe Feb 23 17:51:31 crc kubenswrapper[4724]: I0223 17:51:31.844049 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.134735 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-msc2q"] Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.326037 4724 generic.go:334] "Generic (PLEG): container finished" podID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerID="f4579f75f922b51bafacf8973994907ec9d7ea11e67094d960ceb2d8068095ec" exitCode=0 Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.326130 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" event={"ID":"c546e0ba-a0ef-44b7-a810-e405f8bca93e","Type":"ContainerDied","Data":"f4579f75f922b51bafacf8973994907ec9d7ea11e67094d960ceb2d8068095ec"} Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.326161 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" event={"ID":"c546e0ba-a0ef-44b7-a810-e405f8bca93e","Type":"ContainerStarted","Data":"1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe"} Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.329968 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5l6f9" event={"ID":"1e0a28a1-5db9-4546-836f-1cfa21d4f068","Type":"ContainerStarted","Data":"7695afa7c8a678e78d7c8de09aa13b51249481b2b5e92cb3f9e9b5255540d55c"} Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.332564 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerStarted","Data":"b4277c74d1a2998560c8e08ddebe1a8f09212b50cd50b5c5ea5a048011be0b18"} Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.339351 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aaff144a-3786-4d70-af6d-266870e4e6d2","Type":"ContainerStarted","Data":"64c0ddd05252330da59140a062bb40eaad0bfb53ad42cc7261b8505d211dad64"} Feb 23 17:51:32 crc kubenswrapper[4724]: I0223 17:51:32.381015 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5l6f9" podStartSLOduration=3.380993523 podStartE2EDuration="3.380993523s" 
podCreationTimestamp="2026-02-23 17:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:32.357352396 +0000 UTC m=+1248.173552006" watchObservedRunningTime="2026-02-23 17:51:32.380993523 +0000 UTC m=+1248.197193123" Feb 23 17:51:33 crc kubenswrapper[4724]: I0223 17:51:33.810102 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:51:33 crc kubenswrapper[4724]: I0223 17:51:33.821749 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:34 crc kubenswrapper[4724]: W0223 17:51:34.299958 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54049f9e_01f1_475b_b008_401152f8ca55.slice/crio-50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417 WatchSource:0}: Error finding container 50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417: Status 404 returned error can't find the container with id 50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417 Feb 23 17:51:34 crc kubenswrapper[4724]: I0223 17:51:34.378089 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-msc2q" event={"ID":"54049f9e-01f1-475b-b008-401152f8ca55","Type":"ContainerStarted","Data":"50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.388778 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-msc2q" event={"ID":"54049f9e-01f1-475b-b008-401152f8ca55","Type":"ContainerStarted","Data":"0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.393068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7807bbeb-d20e-4ec9-8587-3bac2e960ab6","Type":"ContainerStarted","Data":"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.395298 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aaff144a-3786-4d70-af6d-266870e4e6d2","Type":"ContainerStarted","Data":"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.395965 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="aaff144a-3786-4d70-af6d-266870e4e6d2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811" gracePeriod=30 Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.397709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" event={"ID":"c546e0ba-a0ef-44b7-a810-e405f8bca93e","Type":"ContainerStarted","Data":"1fe1cc8de80d03f4c774cbc8279a2802d878d46a140e6309a3274349cd326acf"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.398000 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.400443 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerStarted","Data":"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.400483 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerStarted","Data":"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.403307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerStarted","Data":"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.403621 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerStarted","Data":"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629"} Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.403778 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-log" containerID="cri-o://53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" gracePeriod=30 Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.404169 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-metadata" containerID="cri-o://250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" gracePeriod=30 Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.427830 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-msc2q" podStartSLOduration=4.427800804 podStartE2EDuration="4.427800804s" podCreationTimestamp="2026-02-23 17:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:35.409825362 +0000 UTC m=+1251.226024962" watchObservedRunningTime="2026-02-23 17:51:35.427800804 +0000 UTC m=+1251.244000414" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.441034 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8618031999999998 podStartE2EDuration="5.441014896s" podCreationTimestamp="2026-02-23 17:51:30 +0000 UTC" firstStartedPulling="2026-02-23 17:51:31.868615816 +0000 UTC m=+1247.684815416" lastFinishedPulling="2026-02-23 17:51:34.447827512 +0000 UTC m=+1250.264027112" observedRunningTime="2026-02-23 17:51:35.435504623 +0000 UTC m=+1251.251704213" watchObservedRunningTime="2026-02-23 17:51:35.441014896 +0000 UTC m=+1251.257214496" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.462642 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.314863312 podStartE2EDuration="5.462622838s" podCreationTimestamp="2026-02-23 17:51:30 +0000 UTC" firstStartedPulling="2026-02-23 17:51:31.279494761 +0000 UTC m=+1247.095694361" lastFinishedPulling="2026-02-23 17:51:34.427254277 +0000 UTC m=+1250.243453887" observedRunningTime="2026-02-23 17:51:35.453435424 +0000 UTC m=+1251.269635014" watchObservedRunningTime="2026-02-23 17:51:35.462622838 +0000 UTC 
m=+1251.278822438" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.483209 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.294641452 podStartE2EDuration="5.483187223s" podCreationTimestamp="2026-02-23 17:51:30 +0000 UTC" firstStartedPulling="2026-02-23 17:51:31.279514071 +0000 UTC m=+1247.095713671" lastFinishedPulling="2026-02-23 17:51:34.468059842 +0000 UTC m=+1250.284259442" observedRunningTime="2026-02-23 17:51:35.476023553 +0000 UTC m=+1251.292223153" watchObservedRunningTime="2026-02-23 17:51:35.483187223 +0000 UTC m=+1251.299386823" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.492514 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.445635297 podStartE2EDuration="5.49249865s" podCreationTimestamp="2026-02-23 17:51:30 +0000 UTC" firstStartedPulling="2026-02-23 17:51:31.401433617 +0000 UTC m=+1247.217633207" lastFinishedPulling="2026-02-23 17:51:34.44829696 +0000 UTC m=+1250.264496560" observedRunningTime="2026-02-23 17:51:35.490586968 +0000 UTC m=+1251.306786558" watchObservedRunningTime="2026-02-23 17:51:35.49249865 +0000 UTC m=+1251.308698250" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.508914 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" podStartSLOduration=5.508897975 podStartE2EDuration="5.508897975s" podCreationTimestamp="2026-02-23 17:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:35.505439597 +0000 UTC m=+1251.321639197" watchObservedRunningTime="2026-02-23 17:51:35.508897975 +0000 UTC m=+1251.325097575" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.593683 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.646230 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.978111 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:35 crc kubenswrapper[4724]: I0223 17:51:35.978175 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.026806 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.064970 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs\") pod \"01e96b58-f669-4168-9365-1c1fda437753\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.065028 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data\") pod \"01e96b58-f669-4168-9365-1c1fda437753\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.065053 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle\") pod \"01e96b58-f669-4168-9365-1c1fda437753\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.065114 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdgnm\" (UniqueName: \"kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm\") pod \"01e96b58-f669-4168-9365-1c1fda437753\" (UID: \"01e96b58-f669-4168-9365-1c1fda437753\") " Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.065299 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs" (OuterVolumeSpecName: "logs") pod "01e96b58-f669-4168-9365-1c1fda437753" (UID: "01e96b58-f669-4168-9365-1c1fda437753"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.065681 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e96b58-f669-4168-9365-1c1fda437753-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.072034 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm" (OuterVolumeSpecName: "kube-api-access-vdgnm") pod "01e96b58-f669-4168-9365-1c1fda437753" (UID: "01e96b58-f669-4168-9365-1c1fda437753"). InnerVolumeSpecName "kube-api-access-vdgnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.107153 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01e96b58-f669-4168-9365-1c1fda437753" (UID: "01e96b58-f669-4168-9365-1c1fda437753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.110020 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data" (OuterVolumeSpecName: "config-data") pod "01e96b58-f669-4168-9365-1c1fda437753" (UID: "01e96b58-f669-4168-9365-1c1fda437753"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.167737 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.167767 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e96b58-f669-4168-9365-1c1fda437753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.167785 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdgnm\" (UniqueName: \"kubernetes.io/projected/01e96b58-f669-4168-9365-1c1fda437753-kube-api-access-vdgnm\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.413922 4724 generic.go:334] "Generic (PLEG): container finished" podID="01e96b58-f669-4168-9365-1c1fda437753" containerID="250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" exitCode=0 Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.414964 4724 generic.go:334] "Generic (PLEG): container finished" podID="01e96b58-f669-4168-9365-1c1fda437753" containerID="53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" exitCode=143 Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.413999 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.414011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerDied","Data":"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a"} Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.415132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerDied","Data":"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629"} Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.415153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01e96b58-f669-4168-9365-1c1fda437753","Type":"ContainerDied","Data":"b4277c74d1a2998560c8e08ddebe1a8f09212b50cd50b5c5ea5a048011be0b18"} Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.415174 4724 scope.go:117] "RemoveContainer" containerID="250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.438317 4724 scope.go:117] "RemoveContainer" containerID="53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.450481 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.461931 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.476014 4724 scope.go:117] "RemoveContainer" containerID="250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" Feb 23 17:51:36 crc kubenswrapper[4724]: E0223 17:51:36.479172 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a\": 
container with ID starting with 250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a not found: ID does not exist" containerID="250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.479234 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a"} err="failed to get container status \"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a\": rpc error: code = NotFound desc = could not find container \"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a\": container with ID starting with 250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a not found: ID does not exist" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.479288 4724 scope.go:117] "RemoveContainer" containerID="53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" Feb 23 17:51:36 crc kubenswrapper[4724]: E0223 17:51:36.480883 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629\": container with ID starting with 53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629 not found: ID does not exist" containerID="53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.480917 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629"} err="failed to get container status \"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629\": rpc error: code = NotFound desc = could not find container \"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629\": container with ID starting with 53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629 not found: ID does not exist" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.480933 4724 scope.go:117] "RemoveContainer" containerID="250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.481349 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a"} err="failed to get container status \"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a\": rpc error: code = NotFound desc = could not find container \"250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a\": container with ID starting with 250337ba832a8ea46f8de54085a06920bb5f7f1eb561f83f8dc71c0f1918a28a not found: ID does not exist" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.481404 4724 scope.go:117] "RemoveContainer" containerID="53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629" Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.481655 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629"} err="failed to get container status \"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629\": rpc error: code = NotFound desc = could not find container \"53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629\": container with ID starting with 53ad433221c2b60f0910df8ed6ecb6897d949f24b8a494757f3a6a2ce22c7629 not 
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.490301 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:36 crc kubenswrapper[4724]: E0223 17:51:36.490878 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-metadata"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.490901 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-metadata"
Feb 23 17:51:36 crc kubenswrapper[4724]: E0223 17:51:36.490950 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-log"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.490958 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-log"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.491188 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-metadata"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.491209 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e96b58-f669-4168-9365-1c1fda437753" containerName="nova-metadata-log"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.492898 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.495877 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.496142 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.516483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.578425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.578620 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5j27\" (UniqueName: \"kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.578671 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.578745 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.578772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5j27\" (UniqueName: \"kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681109 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681201 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681289 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.681886 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.686045 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.686963 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.688977 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.705373 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5j27\" (UniqueName: \"kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27\") pod \"nova-metadata-0\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.830301 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 17:51:36 crc kubenswrapper[4724]: I0223 17:51:36.974805 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e96b58-f669-4168-9365-1c1fda437753" path="/var/lib/kubelet/pods/01e96b58-f669-4168-9365-1c1fda437753/volumes"
Feb 23 17:51:37 crc kubenswrapper[4724]: I0223 17:51:37.312682 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:37 crc kubenswrapper[4724]: I0223 17:51:37.427975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerStarted","Data":"a00e617ede6efe3bd61ffc123c18b1fff9631b4a47c287d77dc0cd77d54783f1"}
Feb 23 17:51:37 crc kubenswrapper[4724]: I0223 17:51:37.951057 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860"
Feb 23 17:51:38 crc kubenswrapper[4724]: I0223 17:51:38.440705 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerStarted","Data":"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"}
Feb 23 17:51:38 crc kubenswrapper[4724]: I0223 17:51:38.440760 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerStarted","Data":"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"}
Feb 23 17:51:38 crc kubenswrapper[4724]: I0223 17:51:38.444219 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerStarted","Data":"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771"}
Feb 23 17:51:38 crc kubenswrapper[4724]: I0223 17:51:38.462096 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.462071585 podStartE2EDuration="2.462071585s" podCreationTimestamp="2026-02-23 17:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:38.456642724 +0000 UTC m=+1254.272842334" watchObservedRunningTime="2026-02-23 17:51:38.462071585 +0000 UTC m=+1254.278271215"
event={"ID":"1e0a28a1-5db9-4546-836f-1cfa21d4f068","Type":"ContainerDied","Data":"7695afa7c8a678e78d7c8de09aa13b51249481b2b5e92cb3f9e9b5255540d55c"} Feb 23 17:51:40 crc kubenswrapper[4724]: I0223 17:51:40.477705 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:51:40 crc kubenswrapper[4724]: I0223 17:51:40.477734 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:51:40 crc kubenswrapper[4724]: I0223 17:51:40.593526 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 17:51:40 crc kubenswrapper[4724]: I0223 17:51:40.622018 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.002740 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.069003 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"] Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.069213 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="dnsmasq-dns" containerID="cri-o://a35bb29586365a3c7e1d8dfa69147effd24fcaf6b4a7a6c19d16ad6dfdd3adb2" gracePeriod=10 Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.494273 4724 generic.go:334] "Generic (PLEG): container finished" podID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerID="a35bb29586365a3c7e1d8dfa69147effd24fcaf6b4a7a6c19d16ad6dfdd3adb2" exitCode=0 Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.494477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" event={"ID":"9d06e4b0-b516-436c-9c9f-054cfd2dd68f","Type":"ContainerDied","Data":"a35bb29586365a3c7e1d8dfa69147effd24fcaf6b4a7a6c19d16ad6dfdd3adb2"} Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.548051 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.556901 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.556957 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.563376 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.563770 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.615004 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:41 crc 
kubenswrapper[4724]: I0223 17:51:41.806418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.830569 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.832013 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993595 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993653 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993719 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993830 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993899 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") " Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.001512 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq" (OuterVolumeSpecName: "kube-api-access-fkszq") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "kube-api-access-fkszq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.007030 4724 util.go:48] "No ready sandbox for pod can be found. 
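[Annotation] The two "Probe failed" entries just above are HTTP startup probes whose GET against 10.217.0.208:8774 did not return response headers within the probe's timeout; the error text is the standard net/http client-timeout message. The shape of that failure is easy to reproduce standalone (the 500 ms timeout below is illustrative, not the probe's configured timeoutSeconds):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A handler that stalls longer than the client is willing to wait.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()

	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err := client.Get(slow.URL)
	// Prints: Get "http://127.0.0.1:...": context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)
	fmt.Println(err)
}
```

Because these are startup probes, the failures delay readiness rather than restart the container until the failure threshold is crossed.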
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.615004 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.806418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q"
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.830569 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.832013 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993595 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993653 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993719 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993830 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:41 crc kubenswrapper[4724]: I0223 17:51:41.993899 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") pod \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\" (UID: \"9d06e4b0-b516-436c-9c9f-054cfd2dd68f\") "
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.001512 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq" (OuterVolumeSpecName: "kube-api-access-fkszq") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "kube-api-access-fkszq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.007030 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5l6f9"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.059874 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.064717 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.082078 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.090553 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config" (OuterVolumeSpecName: "config") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.092312 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d06e4b0-b516-436c-9c9f-054cfd2dd68f" (UID: "9d06e4b0-b516-436c-9c9f-054cfd2dd68f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100780 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100815 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100826 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100835 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100844 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-config\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.100852 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkszq\" (UniqueName: \"kubernetes.io/projected/9d06e4b0-b516-436c-9c9f-054cfd2dd68f-kube-api-access-fkszq\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.203113 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data\") pod \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") "
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.203191 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts\") pod \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") "
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.203283 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjxt5\" (UniqueName: \"kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5\") pod \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") "
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.203524 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle\") pod \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\" (UID: \"1e0a28a1-5db9-4546-836f-1cfa21d4f068\") "
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.207228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5" (OuterVolumeSpecName: "kube-api-access-rjxt5") pod "1e0a28a1-5db9-4546-836f-1cfa21d4f068" (UID: "1e0a28a1-5db9-4546-836f-1cfa21d4f068"). InnerVolumeSpecName "kube-api-access-rjxt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.210520 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts" (OuterVolumeSpecName: "scripts") pod "1e0a28a1-5db9-4546-836f-1cfa21d4f068" (UID: "1e0a28a1-5db9-4546-836f-1cfa21d4f068"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.238777 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e0a28a1-5db9-4546-836f-1cfa21d4f068" (UID: "1e0a28a1-5db9-4546-836f-1cfa21d4f068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.247954 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data" (OuterVolumeSpecName: "config-data") pod "1e0a28a1-5db9-4546-836f-1cfa21d4f068" (UID: "1e0a28a1-5db9-4546-836f-1cfa21d4f068"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.306316 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.306361 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.306374 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjxt5\" (UniqueName: \"kubernetes.io/projected/1e0a28a1-5db9-4546-836f-1cfa21d4f068-kube-api-access-rjxt5\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.306403 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a28a1-5db9-4546-836f-1cfa21d4f068-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.506857 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q" event={"ID":"9d06e4b0-b516-436c-9c9f-054cfd2dd68f","Type":"ContainerDied","Data":"055834f770288674a53f529da3838bdfc532017fc9e8c0ea2d43b5a92e9979ee"}
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.507224 4724 scope.go:117] "RemoveContainer" containerID="a35bb29586365a3c7e1d8dfa69147effd24fcaf6b4a7a6c19d16ad6dfdd3adb2"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.507478 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c8bdf9fff-r2h5q"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.509901 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5l6f9" event={"ID":"1e0a28a1-5db9-4546-836f-1cfa21d4f068","Type":"ContainerDied","Data":"45926abf516bb1c079f6165ad85510621e36053e595bff8b526429f0efb75a85"}
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.510961 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45926abf516bb1c079f6165ad85510621e36053e595bff8b526429f0efb75a85"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.511122 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5l6f9"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.542300 4724 scope.go:117] "RemoveContainer" containerID="3de46e65402a8d7a6f5944d471d9ae5153f298bbd9e9ce7341c3e50f05f53251"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.565863 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.576874 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.636869 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c8bdf9fff-r2h5q"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.656159 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.721255 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.721586 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-log" containerID="cri-o://ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2" gracePeriod=30
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.721839 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-api" containerID="cri-o://8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3" gracePeriod=30
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.733650 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.746226 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:42 crc kubenswrapper[4724]: I0223 17:51:42.964737 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" path="/var/lib/kubelet/pods/9d06e4b0-b516-436c-9c9f-054cfd2dd68f/volumes"
Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.522948 4724 generic.go:334] "Generic (PLEG): container finished" podID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerID="ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2" exitCode=143
Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.523058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerDied","Data":"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2"}
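[Annotation] The sequence above is the graceful-termination contract: "Killing container with a grace period ... gracePeriod=30" means the runtime sends SIGTERM and only follows up with SIGKILL once the grace period lapses. The exitCode=143 reported for nova-api-log is the container convention 128 + 15 (SIGTERM). A Unix-only sketch of that arithmetic using a throwaway child process (the 100 ms delay and the commented-out 30 s kill timer are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Grace period starts: deliver SIGTERM first.
	time.AfterFunc(100*time.Millisecond, func() { _ = cmd.Process.Signal(syscall.SIGTERM) })
	// A runtime would arm SIGKILL for the deadline, e.g.:
	// time.AfterFunc(30*time.Second, func() { _ = cmd.Process.Kill() })

	err := cmd.Wait()
	if ee, ok := err.(*exec.ExitError); ok {
		ws := ee.Sys().(syscall.WaitStatus) // Unix-only assertion
		fmt.Println("signaled:", ws.Signaled(), "signal:", ws.Signal())
		fmt.Println("container-style exit code:", 128+int(ws.Signal())) // 143
	}
}
```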
event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerDied","Data":"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2"} Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.523182 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerName="nova-scheduler-scheduler" containerID="cri-o://ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" gracePeriod=30 Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.523291 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-log" containerID="cri-o://03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895" gracePeriod=30 Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.523359 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-metadata" containerID="cri-o://8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3" gracePeriod=30 Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.998775 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:43 crc kubenswrapper[4724]: I0223 17:51:43.999275 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api-log" containerID="cri-o://ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" gracePeriod=30 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:43.999515 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api" containerID="cri-o://b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9" gracePeriod=30 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.027765 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.027978 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerName="watcher-applier" containerID="cri-o://83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" gracePeriod=30 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.197493 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.373602 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs\") pod \"05524688-183b-4759-8dd6-98e7ceb26437\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.373764 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5j27\" (UniqueName: \"kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27\") pod \"05524688-183b-4759-8dd6-98e7ceb26437\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.373817 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs\") pod \"05524688-183b-4759-8dd6-98e7ceb26437\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.373877 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data\") pod \"05524688-183b-4759-8dd6-98e7ceb26437\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.373953 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle\") pod \"05524688-183b-4759-8dd6-98e7ceb26437\" (UID: \"05524688-183b-4759-8dd6-98e7ceb26437\") " Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.375246 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs" (OuterVolumeSpecName: "logs") pod "05524688-183b-4759-8dd6-98e7ceb26437" (UID: "05524688-183b-4759-8dd6-98e7ceb26437"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.395607 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27" (OuterVolumeSpecName: "kube-api-access-v5j27") pod "05524688-183b-4759-8dd6-98e7ceb26437" (UID: "05524688-183b-4759-8dd6-98e7ceb26437"). InnerVolumeSpecName "kube-api-access-v5j27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.411541 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05524688-183b-4759-8dd6-98e7ceb26437" (UID: "05524688-183b-4759-8dd6-98e7ceb26437"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.417692 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data" (OuterVolumeSpecName: "config-data") pod "05524688-183b-4759-8dd6-98e7ceb26437" (UID: "05524688-183b-4759-8dd6-98e7ceb26437"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.443548 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "05524688-183b-4759-8dd6-98e7ceb26437" (UID: "05524688-183b-4759-8dd6-98e7ceb26437"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.476154 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05524688-183b-4759-8dd6-98e7ceb26437-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.476196 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5j27\" (UniqueName: \"kubernetes.io/projected/05524688-183b-4759-8dd6-98e7ceb26437-kube-api-access-v5j27\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.476209 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.476220 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.476234 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05524688-183b-4759-8dd6-98e7ceb26437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.535930 4724 generic.go:334] "Generic (PLEG): container finished" podID="54049f9e-01f1-475b-b008-401152f8ca55" containerID="0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76" exitCode=0 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.536019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-msc2q" event={"ID":"54049f9e-01f1-475b-b008-401152f8ca55","Type":"ContainerDied","Data":"0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76"} Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.539348 4724 generic.go:334] "Generic (PLEG): container finished" podID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerID="ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" exitCode=143 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.539423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerDied","Data":"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e"} Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546336 4724 generic.go:334] "Generic (PLEG): container finished" podID="05524688-183b-4759-8dd6-98e7ceb26437" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3" exitCode=0 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546375 4724 generic.go:334] "Generic (PLEG): container finished" podID="05524688-183b-4759-8dd6-98e7ceb26437" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895" exitCode=143 Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.535930 4724 generic.go:334] "Generic (PLEG): container finished" podID="54049f9e-01f1-475b-b008-401152f8ca55" containerID="0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76" exitCode=0
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.536019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-msc2q" event={"ID":"54049f9e-01f1-475b-b008-401152f8ca55","Type":"ContainerDied","Data":"0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76"}
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.539348 4724 generic.go:334] "Generic (PLEG): container finished" podID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerID="ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" exitCode=143
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.539423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerDied","Data":"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e"}
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546336 4724 generic.go:334] "Generic (PLEG): container finished" podID="05524688-183b-4759-8dd6-98e7ceb26437" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3" exitCode=0
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546375 4724 generic.go:334] "Generic (PLEG): container finished" podID="05524688-183b-4759-8dd6-98e7ceb26437" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895" exitCode=143
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546605 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" containerID="cri-o://f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771" gracePeriod=30
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.546724 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.549041 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerDied","Data":"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"}
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.549088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerDied","Data":"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"}
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.549100 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05524688-183b-4759-8dd6-98e7ceb26437","Type":"ContainerDied","Data":"a00e617ede6efe3bd61ffc123c18b1fff9631b4a47c287d77dc0cd77d54783f1"}
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.549120 4724 scope.go:117] "RemoveContainer" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.588893 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.590741 4724 scope.go:117] "RemoveContainer" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.624575 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.643214 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.644038 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="init"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644070 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="init"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.644102 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-log"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644115 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-log"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.644134 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-metadata"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644146 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-metadata"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.644159 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0a28a1-5db9-4546-836f-1cfa21d4f068" containerName="nova-manage"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644171 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0a28a1-5db9-4546-836f-1cfa21d4f068" containerName="nova-manage"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.644230 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="dnsmasq-dns"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644241 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="dnsmasq-dns"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644574 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0a28a1-5db9-4546-836f-1cfa21d4f068" containerName="nova-manage"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644610 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d06e4b0-b516-436c-9c9f-054cfd2dd68f" containerName="dnsmasq-dns"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644635 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-log"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.644661 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05524688-183b-4759-8dd6-98e7ceb26437" containerName="nova-metadata-metadata"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.646370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.650366 4724 scope.go:117] "RemoveContainer" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.655636 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3\": container with ID starting with 8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3 not found: ID does not exist" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.655681 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"} err="failed to get container status \"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3\": rpc error: code = NotFound desc = could not find container \"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3\": container with ID starting with 8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3 not found: ID does not exist"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.655705 4724 scope.go:117] "RemoveContainer" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.659678 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.659917 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 17:51:44 crc kubenswrapper[4724]: E0223 17:51:44.660500 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895\": container with ID starting with 03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895 not found: ID does not exist" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.660531 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"} err="failed to get container status \"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895\": rpc error: code = NotFound desc = could not find container \"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895\": container with ID starting with 03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895 not found: ID does not exist"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.660550 4724 scope.go:117] "RemoveContainer" containerID="8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.664686 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3"} err="failed to get container status \"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3\": rpc error: code = NotFound desc = could not find container \"8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3\": container with ID starting with 8670241cc5d39355c9e06d2d53450548765979159fc1493e827a63c6644192f3 not found: ID does not exist"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.664726 4724 scope.go:117] "RemoveContainer" containerID="03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.670593 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895"} err="failed to get container status \"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895\": rpc error: code = NotFound desc = could not find container \"03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895\": container with ID starting with 03491b7bafc4d5438f103bfb1ebf2359ac1c28ce2cb967b9123cef73ee965895 not found: ID does not exist"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.705917 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.783522 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.783583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.783641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58l72\" (UniqueName: \"kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.783693 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.783788 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885021 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885180 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885235 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58l72\" (UniqueName: \"kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885288 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.885949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.886712 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.887327 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.890637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.898474 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.898485 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:44 crc kubenswrapper[4724]: I0223 17:51:44.910134 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58l72\" (UniqueName: \"kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72\") pod \"nova-metadata-0\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " pod="openstack/nova-metadata-0"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.000745 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05524688-183b-4759-8dd6-98e7ceb26437" path="/var/lib/kubelet/pods/05524688-183b-4759-8dd6-98e7ceb26437/volumes"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.050891 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.131041 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.179:9322/\": read tcp 10.217.0.2:41254->10.217.0.179:9322: read: connection reset by peer"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.131042 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api-log" probeResult="failure" output="Get \"https://10.217.0.179:9322/\": read tcp 10.217.0.2:41260->10.217.0.179:9322: read: connection reset by peer"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.505627 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.512800 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.578442 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerStarted","Data":"6e599623739e2360c620e636c930b00bd74642869deac683d561b43f74b9a545"}
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.581180 4724 generic.go:334] "Generic (PLEG): container finished" podID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerID="b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9" exitCode=0
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.581778 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.582843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerDied","Data":"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9"}
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.582886 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f5c6dff7-7008-48cf-8e14-42d2f92c9221","Type":"ContainerDied","Data":"029280f351739876bb2c782a0c5082a8e9d6eee074f830f8b08b85c495015690"}
Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.582907 4724 scope.go:117] "RemoveContainer" containerID="b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9"
Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.599829 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.602129 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.604273 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.604332 4724 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerName="nova-scheduler-scheduler"
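[Annotation] The three ExecSync failures above are nova-scheduler's exec readiness probe (pgrep inside the container) racing its own termination: CRI-O refuses to register a new exec PID in a stopping container, and the error surfaces with gRPC code Unknown, so the prober can only record a generic "Probe errored" (contrast the clean NotFound handling at 17:51:36 and 17:51:44). A sketch of that classification, assuming the google.golang.org/grpc module is available; the branch messages are illustrative, not kubelet's exact policy:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Reconstructed from the log line; in real use this comes back from
	// the CRI ExecSync RPC.
	err := status.Error(codes.Unknown,
		"command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1")

	switch status.Code(err) {
	case codes.NotFound:
		fmt.Println("target already gone: treat cleanup as done")
	case codes.Unknown:
		fmt.Println("probe errored: count as a failure toward failureThreshold")
	default:
		fmt.Println("other RPC failure:", err)
	}
}
```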
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data\") pod \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.605090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle\") pod \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.605110 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs\") pod \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.605150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpj59\" (UniqueName: \"kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59\") pod \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\" (UID: \"f5c6dff7-7008-48cf-8e14-42d2f92c9221\") " Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.606511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs" (OuterVolumeSpecName: "logs") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.629318 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59" (OuterVolumeSpecName: "kube-api-access-zpj59") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "kube-api-access-zpj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.668107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.688609 4724 scope.go:117] "RemoveContainer" containerID="ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.694948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.698215 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.711667 4724 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.711702 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.711711 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.711722 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c6dff7-7008-48cf-8e14-42d2f92c9221-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.711732 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpj59\" (UniqueName: \"kubernetes.io/projected/f5c6dff7-7008-48cf-8e14-42d2f92c9221-kube-api-access-zpj59\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.736580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data" (OuterVolumeSpecName: "config-data") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.759208 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5c6dff7-7008-48cf-8e14-42d2f92c9221" (UID: "f5c6dff7-7008-48cf-8e14-42d2f92c9221"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.813732 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.813764 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c6dff7-7008-48cf-8e14-42d2f92c9221-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.845084 4724 scope.go:117] "RemoveContainer" containerID="b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9" Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.845578 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9\": container with ID starting with b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9 not found: ID does not exist" containerID="b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.845606 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9"} err="failed to get container status \"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9\": rpc error: code = NotFound desc = could not find container \"b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9\": container with ID starting with b1fced8e7a4ad18c13b9e149eb44f6aee202723b77b5c2401142dd3f1574efe9 not found: ID does not exist" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.845626 4724 scope.go:117] "RemoveContainer" containerID="ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" Feb 23 17:51:45 crc kubenswrapper[4724]: E0223 17:51:45.845923 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e\": container with ID starting with ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e not found: ID does not exist" containerID="ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e" Feb 23 17:51:45 crc kubenswrapper[4724]: I0223 17:51:45.845958 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e"} err="failed to get container status \"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e\": rpc error: code = NotFound desc = could not find container \"ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e\": container with ID starting with ac6c88806f705901a97da86a033889d8172d1e5b2239d121c938c286b6e1f18e not found: ID does not exist" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.095349 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.114719 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.118667 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle\") pod \"54049f9e-01f1-475b-b008-401152f8ca55\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.118738 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data\") pod \"54049f9e-01f1-475b-b008-401152f8ca55\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.118824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs7zx\" (UniqueName: \"kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx\") pod \"54049f9e-01f1-475b-b008-401152f8ca55\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.118898 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts\") pod \"54049f9e-01f1-475b-b008-401152f8ca55\" (UID: \"54049f9e-01f1-475b-b008-401152f8ca55\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.130492 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts" (OuterVolumeSpecName: "scripts") pod "54049f9e-01f1-475b-b008-401152f8ca55" (UID: "54049f9e-01f1-475b-b008-401152f8ca55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.148689 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.154574 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx" (OuterVolumeSpecName: "kube-api-access-bs7zx") pod "54049f9e-01f1-475b-b008-401152f8ca55" (UID: "54049f9e-01f1-475b-b008-401152f8ca55"). InnerVolumeSpecName "kube-api-access-bs7zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.171336 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data" (OuterVolumeSpecName: "config-data") pod "54049f9e-01f1-475b-b008-401152f8ca55" (UID: "54049f9e-01f1-475b-b008-401152f8ca55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.173195 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.173745 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.173759 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.173775 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.173783 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.173794 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54049f9e-01f1-475b-b008-401152f8ca55" containerName="nova-cell1-conductor-db-sync" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.173803 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="54049f9e-01f1-475b-b008-401152f8ca55" containerName="nova-cell1-conductor-db-sync" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.173983 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.174005 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" containerName="watcher-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.174021 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="54049f9e-01f1-475b-b008-401152f8ca55" containerName="nova-cell1-conductor-db-sync" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.175708 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.178786 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.179929 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.180910 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.214783 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54049f9e-01f1-475b-b008-401152f8ca55" (UID: "54049f9e-01f1-475b-b008-401152f8ca55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.217193 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.220788 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.220883 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc365749-e4ec-46b3-9aa8-522dac685189-logs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221006 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-config-data\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221030 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9p2\" (UniqueName: \"kubernetes.io/projected/dc365749-e4ec-46b3-9aa8-522dac685189-kube-api-access-vm9p2\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221056 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221083 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-public-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221201 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221213 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221223 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs7zx\" (UniqueName: 
\"kubernetes.io/projected/54049f9e-01f1-475b-b008-401152f8ca55-kube-api-access-bs7zx\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.221232 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54049f9e-01f1-475b-b008-401152f8ca55-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.307349 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323178 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-config-data\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323227 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9p2\" (UniqueName: \"kubernetes.io/projected/dc365749-e4ec-46b3-9aa8-522dac685189-kube-api-access-vm9p2\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323261 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323291 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-public-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323342 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323377 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.323442 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc365749-e4ec-46b3-9aa8-522dac685189-logs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.328269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc365749-e4ec-46b3-9aa8-522dac685189-logs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.329702 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-public-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.330974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.332787 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.332895 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-config-data\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.334234 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc365749-e4ec-46b3-9aa8-522dac685189-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.346129 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9p2\" (UniqueName: \"kubernetes.io/projected/dc365749-e4ec-46b3-9aa8-522dac685189-kube-api-access-vm9p2\") pod \"watcher-api-0\" (UID: \"dc365749-e4ec-46b3-9aa8-522dac685189\") " pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.425215 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdg9\" (UniqueName: \"kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9\") pod \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.426002 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data\") pod \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.426049 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs\") pod \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.426078 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle\") pod \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\" (UID: \"83c5ec75-90ba-42cf-ab2e-602078cfc1a9\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.426761 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs" (OuterVolumeSpecName: "logs") pod "83c5ec75-90ba-42cf-ab2e-602078cfc1a9" (UID: "83c5ec75-90ba-42cf-ab2e-602078cfc1a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.428965 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9" (OuterVolumeSpecName: "kube-api-access-ttdg9") pod "83c5ec75-90ba-42cf-ab2e-602078cfc1a9" (UID: "83c5ec75-90ba-42cf-ab2e-602078cfc1a9"). InnerVolumeSpecName "kube-api-access-ttdg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.456978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data" (OuterVolumeSpecName: "config-data") pod "83c5ec75-90ba-42cf-ab2e-602078cfc1a9" (UID: "83c5ec75-90ba-42cf-ab2e-602078cfc1a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.462050 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83c5ec75-90ba-42cf-ab2e-602078cfc1a9" (UID: "83c5ec75-90ba-42cf-ab2e-602078cfc1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.478623 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.501674 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.527616 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle\") pod \"86eb7ff0-87b2-4538-8c5b-9126768e810b\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.527943 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca\") pod \"86eb7ff0-87b2-4538-8c5b-9126768e810b\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.528109 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs\") pod \"86eb7ff0-87b2-4538-8c5b-9126768e810b\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.528240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcb7c\" (UniqueName: \"kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c\") pod \"86eb7ff0-87b2-4538-8c5b-9126768e810b\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.528463 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data\") pod \"86eb7ff0-87b2-4538-8c5b-9126768e810b\" (UID: \"86eb7ff0-87b2-4538-8c5b-9126768e810b\") " Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.528979 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.529089 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.529165 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.529237 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdg9\" (UniqueName: \"kubernetes.io/projected/83c5ec75-90ba-42cf-ab2e-602078cfc1a9-kube-api-access-ttdg9\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.531118 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs" (OuterVolumeSpecName: "logs") pod "86eb7ff0-87b2-4538-8c5b-9126768e810b" (UID: "86eb7ff0-87b2-4538-8c5b-9126768e810b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.539766 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c" (OuterVolumeSpecName: "kube-api-access-mcb7c") pod "86eb7ff0-87b2-4538-8c5b-9126768e810b" (UID: "86eb7ff0-87b2-4538-8c5b-9126768e810b"). InnerVolumeSpecName "kube-api-access-mcb7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.573862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86eb7ff0-87b2-4538-8c5b-9126768e810b" (UID: "86eb7ff0-87b2-4538-8c5b-9126768e810b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.619093 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "86eb7ff0-87b2-4538-8c5b-9126768e810b" (UID: "86eb7ff0-87b2-4538-8c5b-9126768e810b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.630936 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.630970 4724 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.630983 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86eb7ff0-87b2-4538-8c5b-9126768e810b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.630996 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcb7c\" (UniqueName: \"kubernetes.io/projected/86eb7ff0-87b2-4538-8c5b-9126768e810b-kube-api-access-mcb7c\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.638567 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data" (OuterVolumeSpecName: "config-data") pod "86eb7ff0-87b2-4538-8c5b-9126768e810b" (UID: "86eb7ff0-87b2-4538-8c5b-9126768e810b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.639888 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640364 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640381 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640416 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-api" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640424 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-api" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640440 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640447 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640460 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640469 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640481 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640487 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: E0223 17:51:46.640502 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640509 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640735 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640754 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-api" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640767 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640779 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerName="nova-api-log" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640797 4724 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.640811 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.642721 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.644218 4724 generic.go:334] "Generic (PLEG): container finished" podID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" containerID="8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3" exitCode=0 Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.644284 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.644316 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerDied","Data":"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.644342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83c5ec75-90ba-42cf-ab2e-602078cfc1a9","Type":"ContainerDied","Data":"dad5b0fa905224fe74ad05ad29ec4432941eec4a0e0600ddd0c3308ae17696db"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.644359 4724 scope.go:117] "RemoveContainer" containerID="8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.662166 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerStarted","Data":"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.662212 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerStarted","Data":"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.667286 4724 generic.go:334] "Generic (PLEG): container finished" podID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerID="f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771" exitCode=0 Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.667347 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.667776 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"86eb7ff0-87b2-4538-8c5b-9126768e810b","Type":"ContainerDied","Data":"096be882dd2c9ce1636d5343c6cff8a7494ec3928dcd07af220ff82278694312"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.667380 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.677077 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-msc2q" event={"ID":"54049f9e-01f1-475b-b008-401152f8ca55","Type":"ContainerDied","Data":"50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417"} Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.677188 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e33c2a98675e097c75deb6b9842a8cb2947d10902172a64b9a75f3d163f417" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.677291 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-msc2q" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.690538 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.735056 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86eb7ff0-87b2-4538-8c5b-9126768e810b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.762992 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.762970623 podStartE2EDuration="2.762970623s" podCreationTimestamp="2026-02-23 17:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:46.739694063 +0000 UTC m=+1262.555893663" watchObservedRunningTime="2026-02-23 17:51:46.762970623 +0000 UTC m=+1262.579170223" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.837497 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.837555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.837639 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz5c\" (UniqueName: \"kubernetes.io/projected/e54fa012-7969-4917-888f-a2f822eb9449-kube-api-access-pzz5c\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.904600 4724 scope.go:117] "RemoveContainer" containerID="ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.943499 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc 
kubenswrapper[4724]: I0223 17:51:46.943553 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.946127 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzz5c\" (UniqueName: \"kubernetes.io/projected/e54fa012-7969-4917-888f-a2f822eb9449-kube-api-access-pzz5c\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.983788 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c6dff7-7008-48cf-8e14-42d2f92c9221" path="/var/lib/kubelet/pods/f5c6dff7-7008-48cf-8e14-42d2f92c9221/volumes" Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.990967 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:51:46 crc kubenswrapper[4724]: I0223 17:51:46.995035 4724 scope.go:117] "RemoveContainer" containerID="8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3" Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:46.995497 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3\": container with ID starting with 8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3 not found: ID does not exist" containerID="8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.995537 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3"} err="failed to get container status \"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3\": rpc error: code = NotFound desc = could not find container \"8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3\": container with ID starting with 8e063fb9127cff3f77eeda92e3f8d5a1fb7f04f304b536d1b7008271259be6e3 not found: ID does not exist" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.995555 4724 scope.go:117] "RemoveContainer" containerID="ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2" Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:46.995941 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2\": container with ID starting with ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2 not found: ID does not exist" containerID="ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.995959 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2"} err="failed to get container status \"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2\": rpc error: code = NotFound desc = could not find container \"ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2\": container with ID starting with 
ad166250820023650ef99135e3e3a62d1b170bc264efbc9e92267f78f16ecbb2 not found: ID does not exist" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.995972 4724 scope.go:117] "RemoveContainer" containerID="f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.998326 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzz5c\" (UniqueName: \"kubernetes.io/projected/e54fa012-7969-4917-888f-a2f822eb9449-kube-api-access-pzz5c\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:46.998612 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.002507 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54fa012-7969-4917-888f-a2f822eb9449-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e54fa012-7969-4917-888f-a2f822eb9449\") " pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.017704 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.065913 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.100287 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.101821 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.116639 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.117247 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.117263 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.117544 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" containerName="watcher-decision-engine" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.118418 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.120917 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.135677 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.149450 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.151543 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.152165 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lts\" (UniqueName: \"kubernetes.io/projected/935753ed-464b-4bac-af1f-e356a473c78f-kube-api-access-z9lts\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.152432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.152512 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/935753ed-464b-4bac-af1f-e356a473c78f-logs\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.152604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.152650 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.154678 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.165502 4724 scope.go:117] "RemoveContainer" containerID="f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771" Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.167549 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771\": container with ID starting with f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771 not found: ID does not exist" containerID="f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771" Feb 23 17:51:47 crc 
kubenswrapper[4724]: I0223 17:51:47.167634 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771"} err="failed to get container status \"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771\": rpc error: code = NotFound desc = could not find container \"f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771\": container with ID starting with f0043a56eaba4ccab5e666dfe3695b37c622d8b9aa9bd52e8635b4943c61a771 not found: ID does not exist" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.167703 4724 scope.go:117] "RemoveContainer" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.168235 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.168514 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860\": container with ID starting with 14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860 not found: ID does not exist" containerID="14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.168544 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860"} err="failed to get container status \"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860\": rpc error: code = NotFound desc = could not find container \"14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860\": container with ID starting with 14e56e42a874bd312d515edbb00d44d0a514d159d98544acf28fba090f14c860 not found: ID does not exist" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.187326 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.220507 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.253634 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle\") pod \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.253765 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjm8w\" (UniqueName: \"kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w\") pod \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.253803 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data\") pod \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\" (UID: \"7807bbeb-d20e-4ec9-8587-3bac2e960ab6\") " Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.253952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.253991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254012 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2sz\" (UniqueName: \"kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254096 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lts\" (UniqueName: \"kubernetes.io/projected/935753ed-464b-4bac-af1f-e356a473c78f-kube-api-access-z9lts\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254238 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254275 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254306 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/935753ed-464b-4bac-af1f-e356a473c78f-logs\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.254691 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/935753ed-464b-4bac-af1f-e356a473c78f-logs\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.258041 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w" (OuterVolumeSpecName: "kube-api-access-sjm8w") pod "7807bbeb-d20e-4ec9-8587-3bac2e960ab6" (UID: "7807bbeb-d20e-4ec9-8587-3bac2e960ab6"). InnerVolumeSpecName "kube-api-access-sjm8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.259906 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.261066 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.261306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/935753ed-464b-4bac-af1f-e356a473c78f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.272805 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lts\" (UniqueName: \"kubernetes.io/projected/935753ed-464b-4bac-af1f-e356a473c78f-kube-api-access-z9lts\") pod \"watcher-decision-engine-0\" (UID: \"935753ed-464b-4bac-af1f-e356a473c78f\") " pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.281386 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data" 
(OuterVolumeSpecName: "config-data") pod "7807bbeb-d20e-4ec9-8587-3bac2e960ab6" (UID: "7807bbeb-d20e-4ec9-8587-3bac2e960ab6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.289759 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7807bbeb-d20e-4ec9-8587-3bac2e960ab6" (UID: "7807bbeb-d20e-4ec9-8587-3bac2e960ab6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356042 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356105 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356130 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356152 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2sz\" (UniqueName: \"kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356281 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjm8w\" (UniqueName: \"kubernetes.io/projected/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-kube-api-access-sjm8w\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356293 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356303 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7807bbeb-d20e-4ec9-8587-3bac2e960ab6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.356492 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.360020 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" 
Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.363293 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.374758 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2sz\" (UniqueName: \"kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz\") pod \"nova-api-0\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.425336 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: W0223 17:51:47.427920 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc365749_e4ec_46b3_9aa8_522dac685189.slice/crio-e57acd01a48593b7b2c5cb5691cd8730c74d32988c5d54749bc81c610f4e858c WatchSource:0}: Error finding container e57acd01a48593b7b2c5cb5691cd8730c74d32988c5d54749bc81c610f4e858c: Status 404 returned error can't find the container with id e57acd01a48593b7b2c5cb5691cd8730c74d32988c5d54749bc81c610f4e858c Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.446949 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.482164 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.657729 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.668238 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.669315 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.671380 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.671466 4724 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerName="watcher-applier" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.690700 4724 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/watcher-api-0" event={"ID":"dc365749-e4ec-46b3-9aa8-522dac685189","Type":"ContainerStarted","Data":"88b465b3220d6428e1ebea1b53f614ee7d382cedf9fcdf6c06cf511ff8a639e8"} Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.690738 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"dc365749-e4ec-46b3-9aa8-522dac685189","Type":"ContainerStarted","Data":"e57acd01a48593b7b2c5cb5691cd8730c74d32988c5d54749bc81c610f4e858c"} Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.692332 4724 generic.go:334] "Generic (PLEG): container finished" podID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" exitCode=0 Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.692437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7807bbeb-d20e-4ec9-8587-3bac2e960ab6","Type":"ContainerDied","Data":"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12"} Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.692437 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.692496 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7807bbeb-d20e-4ec9-8587-3bac2e960ab6","Type":"ContainerDied","Data":"fc93fe9bfda78917dc965102bfb4c95255e5df5190e61f314e3cd252048e14c9"} Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.692518 4724 scope.go:117] "RemoveContainer" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.694704 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e54fa012-7969-4917-888f-a2f822eb9449","Type":"ContainerStarted","Data":"030720816a38bbdfa92b48b9ee2a27e80688e787d10e05d9507f4c902e9c4ca0"} Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.877869 4724 scope.go:117] "RemoveContainer" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" Feb 23 17:51:47 crc kubenswrapper[4724]: E0223 17:51:47.878756 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12\": container with ID starting with ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12 not found: ID does not exist" containerID="ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.878794 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12"} err="failed to get container status \"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12\": rpc error: code = NotFound desc = could not find container \"ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12\": container with ID starting with ebeb9f568cc4f635943070cc519b22051c37608c9116b04c9209f23c14d8ff12 not found: ID does not exist" Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.930543 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.950145 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 
23 17:51:47 crc kubenswrapper[4724]: I0223 17:51:47.993023 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.011008 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:48 crc kubenswrapper[4724]: E0223 17:51:48.011555 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerName="nova-scheduler-scheduler" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.011573 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerName="nova-scheduler-scheduler" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.011908 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" containerName="nova-scheduler-scheduler" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.012741 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.016691 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.028979 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:48 crc kubenswrapper[4724]: W0223 17:51:48.057610 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod261b32f3_e185_41ff_bd81_f6c6a5baea04.slice/crio-5d13d0c51589a1840f27c9046ea19ad9686b3b02ab33c6bd98099f3d0627fa8b WatchSource:0}: Error finding container 5d13d0c51589a1840f27c9046ea19ad9686b3b02ab33c6bd98099f3d0627fa8b: Status 404 returned error can't find the container with id 5d13d0c51589a1840f27c9046ea19ad9686b3b02ab33c6bd98099f3d0627fa8b Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.058090 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.077897 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.077942 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4cz\" (UniqueName: \"kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.078347 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.180362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data\") pod \"nova-scheduler-0\" 
(UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.180662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.180693 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z4cz\" (UniqueName: \"kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.188893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.189948 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.201032 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z4cz\" (UniqueName: \"kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz\") pod \"nova-scheduler-0\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.366682 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.609833 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.687995 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8p4t\" (UniqueName: \"kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t\") pod \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.688473 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle\") pod \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.688517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs\") pod \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.688657 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data\") pod \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\" (UID: \"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56\") " Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.688944 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs" (OuterVolumeSpecName: "logs") pod "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" (UID: "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.689574 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.696815 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t" (OuterVolumeSpecName: "kube-api-access-m8p4t") pod "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" (UID: "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56"). InnerVolumeSpecName "kube-api-access-m8p4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.744450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"dc365749-e4ec-46b3-9aa8-522dac685189","Type":"ContainerStarted","Data":"702bceefba9e56e94fbea29ca03702a166ee9ab587b5c2c08863ef0e048c5d91"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.748079 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.759609 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data" (OuterVolumeSpecName: "config-data") pod "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" (UID: "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770732 4724 generic.go:334] "Generic (PLEG): container finished" podID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" exitCode=0 Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56","Type":"ContainerDied","Data":"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f1d15c2-eeeb-41fe-89c7-27ad522e5c56","Type":"ContainerDied","Data":"27b16a0bdfadc318538cd26596ec33af0f6e3f5bb6d50e49f2e9bcf4e10ba500"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770841 4724 scope.go:117] "RemoveContainer" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770935 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.770939 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.770916745 podStartE2EDuration="2.770916745s" podCreationTimestamp="2026-02-23 17:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:48.768051467 +0000 UTC m=+1264.584251067" watchObservedRunningTime="2026-02-23 17:51:48.770916745 +0000 UTC m=+1264.587116365" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.781588 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerStarted","Data":"52d70ef2072bf7a24af0c68548af254c5d855ffe788c6d59e7d89e825836face"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.781635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerStarted","Data":"730af69de6073de9614d07a449de7dc7d3b37f3bfc710e74d6f106f87df75c13"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.781649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerStarted","Data":"5d13d0c51589a1840f27c9046ea19ad9686b3b02ab33c6bd98099f3d0627fa8b"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.791139 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8p4t\" (UniqueName: \"kubernetes.io/projected/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-kube-api-access-m8p4t\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.791168 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.792890 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"e54fa012-7969-4917-888f-a2f822eb9449","Type":"ContainerStarted","Data":"85f8c641e7b8809d4bbdaf52e99a5db2a80385ef82a6298482be1987efb69625"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.793932 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.795977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" (UID: "9f1d15c2-eeeb-41fe-89c7-27ad522e5c56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.797532 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"935753ed-464b-4bac-af1f-e356a473c78f","Type":"ContainerStarted","Data":"c50bd87d83587ef8799f7341d843d7db1898c763b99e286eb5fe80028cbcd3b9"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.797588 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"935753ed-464b-4bac-af1f-e356a473c78f","Type":"ContainerStarted","Data":"ce29c1eef24e3c8bb7802ede453f8bcd47c997035f623628d95415e71dc040ca"} Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.813834 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.813792604 podStartE2EDuration="2.813792604s" podCreationTimestamp="2026-02-23 17:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:48.809342119 +0000 UTC m=+1264.625541719" watchObservedRunningTime="2026-02-23 17:51:48.813792604 +0000 UTC m=+1264.629992204" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.859758 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.859741505 podStartE2EDuration="2.859741505s" podCreationTimestamp="2026-02-23 17:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:48.827947542 +0000 UTC m=+1264.644147142" watchObservedRunningTime="2026-02-23 17:51:48.859741505 +0000 UTC m=+1264.675941105" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.870302 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.870282912 podStartE2EDuration="2.870282912s" podCreationTimestamp="2026-02-23 17:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:48.843416261 +0000 UTC m=+1264.659615861" watchObservedRunningTime="2026-02-23 17:51:48.870282912 +0000 UTC m=+1264.686482512" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.887446 4724 scope.go:117] "RemoveContainer" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" Feb 23 17:51:48 crc kubenswrapper[4724]: E0223 17:51:48.888599 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db\": container with ID starting with 
83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db not found: ID does not exist" containerID="83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.888683 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db"} err="failed to get container status \"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db\": rpc error: code = NotFound desc = could not find container \"83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db\": container with ID starting with 83420c1acfc2061d9432f2021c0cc50a6bcef3c81d941d660d001230df4580db not found: ID does not exist" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.893941 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:48 crc kubenswrapper[4724]: W0223 17:51:48.897478 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2f1a31_7f08_451d_962d_88ee8fd7f246.slice/crio-781596aaefb4dd95f00e5e80d58a51d1008a1868d2cfb0826d2ec3df1d9439ff WatchSource:0}: Error finding container 781596aaefb4dd95f00e5e80d58a51d1008a1868d2cfb0826d2ec3df1d9439ff: Status 404 returned error can't find the container with id 781596aaefb4dd95f00e5e80d58a51d1008a1868d2cfb0826d2ec3df1d9439ff Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.899533 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.967218 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7807bbeb-d20e-4ec9-8587-3bac2e960ab6" path="/var/lib/kubelet/pods/7807bbeb-d20e-4ec9-8587-3bac2e960ab6/volumes" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.968442 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c5ec75-90ba-42cf-ab2e-602078cfc1a9" path="/var/lib/kubelet/pods/83c5ec75-90ba-42cf-ab2e-602078cfc1a9/volumes" Feb 23 17:51:48 crc kubenswrapper[4724]: I0223 17:51:48.969300 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86eb7ff0-87b2-4538-8c5b-9126768e810b" path="/var/lib/kubelet/pods/86eb7ff0-87b2-4538-8c5b-9126768e810b/volumes" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.097267 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.113283 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.127949 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:49 crc kubenswrapper[4724]: E0223 17:51:49.128462 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerName="watcher-applier" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.128486 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerName="watcher-applier" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.128791 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" containerName="watcher-applier" Feb 23 17:51:49 
crc kubenswrapper[4724]: I0223 17:51:49.129602 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.132729 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.160643 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.300486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.300638 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-config-data\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.300678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclx9\" (UniqueName: \"kubernetes.io/projected/ad11f589-aa5d-493e-b431-25f6f7b0675b-kube-api-access-qclx9\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.300718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad11f589-aa5d-493e-b431-25f6f7b0675b-logs\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.402758 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.402894 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-config-data\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.402934 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qclx9\" (UniqueName: \"kubernetes.io/projected/ad11f589-aa5d-493e-b431-25f6f7b0675b-kube-api-access-qclx9\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.402967 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad11f589-aa5d-493e-b431-25f6f7b0675b-logs\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.403444 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad11f589-aa5d-493e-b431-25f6f7b0675b-logs\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.409012 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.409914 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad11f589-aa5d-493e-b431-25f6f7b0675b-config-data\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.420615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qclx9\" (UniqueName: \"kubernetes.io/projected/ad11f589-aa5d-493e-b431-25f6f7b0675b-kube-api-access-qclx9\") pod \"watcher-applier-0\" (UID: \"ad11f589-aa5d-493e-b431-25f6f7b0675b\") " pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.448718 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.808762 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d2f1a31-7f08-451d-962d-88ee8fd7f246","Type":"ContainerStarted","Data":"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a"} Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.808823 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d2f1a31-7f08-451d-962d-88ee8fd7f246","Type":"ContainerStarted","Data":"781596aaefb4dd95f00e5e80d58a51d1008a1868d2cfb0826d2ec3df1d9439ff"} Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.832598 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.832579467 podStartE2EDuration="2.832579467s" podCreationTimestamp="2026-02-23 17:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:49.823593306 +0000 UTC m=+1265.639792906" watchObservedRunningTime="2026-02-23 17:51:49.832579467 +0000 UTC m=+1265.648779067" Feb 23 17:51:49 crc kubenswrapper[4724]: I0223 17:51:49.901437 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.050993 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.051043 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.821243 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"ad11f589-aa5d-493e-b431-25f6f7b0675b","Type":"ContainerStarted","Data":"83c522b39bb3395f7878194919442c452454d9aaba87ac46206d001845b5c4c8"} Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.821570 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"ad11f589-aa5d-493e-b431-25f6f7b0675b","Type":"ContainerStarted","Data":"d8e9e8ec06a8a827cc70dc022ea76b809dd71dacc30fa7ee943fe132d0173e71"} Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.821435 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.842023 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=1.8419769339999998 podStartE2EDuration="1.841976934s" podCreationTimestamp="2026-02-23 17:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:51:50.835228601 +0000 UTC m=+1266.651428201" watchObservedRunningTime="2026-02-23 17:51:50.841976934 +0000 UTC m=+1266.658176534" Feb 23 17:51:50 crc kubenswrapper[4724]: I0223 17:51:50.962663 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1d15c2-eeeb-41fe-89c7-27ad522e5c56" path="/var/lib/kubelet/pods/9f1d15c2-eeeb-41fe-89c7-27ad522e5c56/volumes" Feb 23 17:51:51 crc kubenswrapper[4724]: I0223 17:51:51.089810 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:51:51 crc kubenswrapper[4724]: I0223 17:51:51.503335 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 17:51:52 crc kubenswrapper[4724]: I0223 17:51:52.527021 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 17:51:53 crc kubenswrapper[4724]: I0223 17:51:53.367838 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 17:51:54 crc kubenswrapper[4724]: I0223 17:51:54.450109 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 23 17:51:55 crc kubenswrapper[4724]: I0223 17:51:55.052430 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 17:51:55 crc kubenswrapper[4724]: I0223 17:51:55.052663 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 17:51:55 crc kubenswrapper[4724]: I0223 17:51:55.843516 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:55 crc kubenswrapper[4724]: I0223 17:51:55.843767 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" containerName="kube-state-metrics" containerID="cri-o://54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a" gracePeriod=30 Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.065542 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.065554 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.344323 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.480586 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcgvs\" (UniqueName: \"kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs\") pod \"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52\" (UID: \"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52\") " Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.488507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs" (OuterVolumeSpecName: "kube-api-access-fcgvs") pod "5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" (UID: "5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52"). InnerVolumeSpecName "kube-api-access-fcgvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.503355 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.510652 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.583724 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcgvs\" (UniqueName: \"kubernetes.io/projected/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52-kube-api-access-fcgvs\") on node \"crc\" DevicePath \"\"" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.875464 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" containerID="54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a" exitCode=2 Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.876939 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.877248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52","Type":"ContainerDied","Data":"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a"} Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.877278 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52","Type":"ContainerDied","Data":"d093a0de64a221c2d8e46d488793acc49c2d668c942b59f2c8acf9c8d616944a"} Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.877294 4724 scope.go:117] "RemoveContainer" containerID="54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.912163 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.919642 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.924311 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.931101 4724 scope.go:117] "RemoveContainer" containerID="54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a" Feb 23 17:51:56 crc kubenswrapper[4724]: E0223 17:51:56.933304 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a\": container with ID starting with 54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a not found: ID does not exist" containerID="54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.933337 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a"} err="failed to get container status \"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a\": rpc error: code = NotFound desc = could not find container \"54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a\": container with ID starting with 54bddba8b4506cc3fb1debe40f7cc99f224f4b7b488855fee980efb175119e2a not found: ID does not exist" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.938643 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:56 crc kubenswrapper[4724]: E0223 17:51:56.939027 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" containerName="kube-state-metrics" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.939046 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" containerName="kube-state-metrics" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.939220 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" containerName="kube-state-metrics" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.939867 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.947901 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.947952 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.987521 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52" path="/var/lib/kubelet/pods/5d4d82a5-2e0e-459f-b9f4-bb1ab23b3c52/volumes" Feb 23 17:51:56 crc kubenswrapper[4724]: I0223 17:51:56.989669 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.093744 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.093784 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.093822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.093986 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7wh\" (UniqueName: \"kubernetes.io/projected/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-api-access-rk7wh\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.196061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7wh\" (UniqueName: \"kubernetes.io/projected/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-api-access-rk7wh\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.196185 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.196208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.196241 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.200635 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.201716 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.204994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.215772 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7wh\" (UniqueName: \"kubernetes.io/projected/85b4f79b-e696-483e-8ee7-8653f8c07a40-kube-api-access-rk7wh\") pod \"kube-state-metrics-0\" (UID: \"85b4f79b-e696-483e-8ee7-8653f8c07a40\") " pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.260134 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.311613 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.447253 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.485764 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.489142 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.495140 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.752262 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.752666 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:51:57 crc kubenswrapper[4724]: W0223 17:51:57.817525 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b4f79b_e696_483e_8ee7_8653f8c07a40.slice/crio-099ad456a39176f27e9051f3830008149cde7500c3ab9c17cebe800848b9313a WatchSource:0}: Error finding container 099ad456a39176f27e9051f3830008149cde7500c3ab9c17cebe800848b9313a: Status 404 returned error can't find the container with id 099ad456a39176f27e9051f3830008149cde7500c3ab9c17cebe800848b9313a Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.818847 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.887561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"85b4f79b-e696-483e-8ee7-8653f8c07a40","Type":"ContainerStarted","Data":"099ad456a39176f27e9051f3830008149cde7500c3ab9c17cebe800848b9313a"} Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.890253 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:57 crc kubenswrapper[4724]: I0223 17:51:57.922156 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.207755 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.208019 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-central-agent" containerID="cri-o://fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef" gracePeriod=30 Feb 23 17:51:58 
crc kubenswrapper[4724]: I0223 17:51:58.208087 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="proxy-httpd" containerID="cri-o://67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7" gracePeriod=30 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.208131 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="sg-core" containerID="cri-o://24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249" gracePeriod=30 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.208166 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-notification-agent" containerID="cri-o://78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143" gracePeriod=30 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.367357 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.414845 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.568648 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.568681 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.899169 4724 generic.go:334] "Generic (PLEG): container finished" podID="46766c09-b7dd-4263-8e07-089095bb5cac" containerID="67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7" exitCode=0 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.899963 4724 generic.go:334] "Generic (PLEG): container finished" podID="46766c09-b7dd-4263-8e07-089095bb5cac" containerID="24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249" exitCode=2 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.900065 4724 generic.go:334] "Generic (PLEG): container finished" podID="46766c09-b7dd-4263-8e07-089095bb5cac" containerID="fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef" exitCode=0 Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.899255 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerDied","Data":"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7"} Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.900291 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerDied","Data":"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249"} Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.900375 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerDied","Data":"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef"} Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.902458 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"85b4f79b-e696-483e-8ee7-8653f8c07a40","Type":"ContainerStarted","Data":"160067efe69d7fd354255b53f5d91e869e3ef1fe12324ff3f6b37bec4de91bc6"} Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.929996 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.564852504 podStartE2EDuration="2.929974561s" podCreationTimestamp="2026-02-23 17:51:56 +0000 UTC" firstStartedPulling="2026-02-23 17:51:57.820382902 +0000 UTC m=+1273.636582502" lastFinishedPulling="2026-02-23 17:51:58.185504969 +0000 UTC m=+1274.001704559" observedRunningTime="2026-02-23 17:51:58.917367879 +0000 UTC m=+1274.733567479" watchObservedRunningTime="2026-02-23 17:51:58.929974561 +0000 UTC m=+1274.746174161" Feb 23 17:51:58 crc kubenswrapper[4724]: I0223 17:51:58.934552 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 17:51:59 crc kubenswrapper[4724]: I0223 17:51:59.449688 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 23 17:51:59 crc kubenswrapper[4724]: I0223 17:51:59.495662 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 23 17:51:59 crc kubenswrapper[4724]: I0223 17:51:59.911913 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 17:51:59 crc kubenswrapper[4724]: I0223 17:51:59.938847 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.673252 4724 util.go:48] "No ready sandbox for pod can be found. 
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.673252 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.765842 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.765965 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766001 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766067 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cvgc\" (UniqueName: \"kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766096 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766167 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml\") pod \"46766c09-b7dd-4263-8e07-089095bb5cac\" (UID: \"46766c09-b7dd-4263-8e07-089095bb5cac\") "
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766306 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766611 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.766667 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac").
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.791248 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts" (OuterVolumeSpecName: "scripts") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.794370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc" (OuterVolumeSpecName: "kube-api-access-8cvgc") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). InnerVolumeSpecName "kube-api-access-8cvgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.807111 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.854928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.868427 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.868461 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.868473 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cvgc\" (UniqueName: \"kubernetes.io/projected/46766c09-b7dd-4263-8e07-089095bb5cac-kube-api-access-8cvgc\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.868485 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46766c09-b7dd-4263-8e07-089095bb5cac-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.868493 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.878154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data" (OuterVolumeSpecName: "config-data") pod "46766c09-b7dd-4263-8e07-089095bb5cac" (UID: "46766c09-b7dd-4263-8e07-089095bb5cac"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.925458 4724 generic.go:334] "Generic (PLEG): container finished" podID="46766c09-b7dd-4263-8e07-089095bb5cac" containerID="78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143" exitCode=0 Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.925534 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerDied","Data":"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143"} Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.925572 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.925590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46766c09-b7dd-4263-8e07-089095bb5cac","Type":"ContainerDied","Data":"c4f61a2556b232fbdf3f5d35217153ff743138ebe121e05aeb62aaad422b1d88"} Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.925611 4724 scope.go:117] "RemoveContainer" containerID="67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.955307 4724 scope.go:117] "RemoveContainer" containerID="24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.970690 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46766c09-b7dd-4263-8e07-089095bb5cac-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.971689 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.975461 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.986321 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:00 crc kubenswrapper[4724]: E0223 17:52:00.989905 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="sg-core" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990021 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="sg-core" Feb 23 17:52:00 crc kubenswrapper[4724]: E0223 17:52:00.990046 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-notification-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990063 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-notification-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: E0223 17:52:00.990087 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-central-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990092 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-central-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: E0223 17:52:00.990108 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="proxy-httpd" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990114 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="proxy-httpd" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990556 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-central-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990597 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="proxy-httpd" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990609 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="ceilometer-notification-agent" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.990622 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" containerName="sg-core" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.992419 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.994663 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.994946 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:52:00 crc kubenswrapper[4724]: I0223 17:52:00.995096 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.007716 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.021373 4724 scope.go:117] "RemoveContainer" containerID="78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.049658 4724 scope.go:117] "RemoveContainer" containerID="fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.069174 4724 scope.go:117] "RemoveContainer" containerID="67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7" Feb 23 17:52:01 crc kubenswrapper[4724]: E0223 17:52:01.069502 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7\": container with ID starting with 67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7 not found: ID does not exist" containerID="67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.069530 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7"} err="failed to get container status \"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7\": rpc error: code = NotFound desc = could not find container \"67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7\": container with ID starting with 67c01ef983e5b4b5f3ec3a4b2830f5fff768c07aadabb38f753c330ce9d52ec7 not found: ID does not exist" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 
17:52:01.069553 4724 scope.go:117] "RemoveContainer" containerID="24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249" Feb 23 17:52:01 crc kubenswrapper[4724]: E0223 17:52:01.069754 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249\": container with ID starting with 24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249 not found: ID does not exist" containerID="24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.069779 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249"} err="failed to get container status \"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249\": rpc error: code = NotFound desc = could not find container \"24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249\": container with ID starting with 24699cdfd11b6b4552163f95f768cc5bd42824beb8cc2df8a7a177f9a316b249 not found: ID does not exist" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.069791 4724 scope.go:117] "RemoveContainer" containerID="78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143" Feb 23 17:52:01 crc kubenswrapper[4724]: E0223 17:52:01.070142 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143\": container with ID starting with 78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143 not found: ID does not exist" containerID="78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.070162 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143"} err="failed to get container status \"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143\": rpc error: code = NotFound desc = could not find container \"78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143\": container with ID starting with 78c868e904ea64e5a5111b28a0772dd194161393b04e7bd194f0b954bcb2b143 not found: ID does not exist" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.070173 4724 scope.go:117] "RemoveContainer" containerID="fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef" Feb 23 17:52:01 crc kubenswrapper[4724]: E0223 17:52:01.070349 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef\": container with ID starting with fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef not found: ID does not exist" containerID="fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.070364 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef"} err="failed to get container status \"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef\": rpc error: code = NotFound desc = could not find container \"fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef\": container with ID 
starting with fc2c97350ceae43cec4fd85fb3317fa56f876e004aba7f5ab5fe4766ee6765ef not found: ID does not exist" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072032 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072104 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072203 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072265 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072307 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072424 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072529 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv85n\" (UniqueName: \"kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.072588 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.174434 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.175203 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.175806 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.175944 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.176056 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv85n\" (UniqueName: \"kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.176149 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.175138 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.176252 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.176478 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.176996 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.180043 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.180968 4724 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.182097 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.182189 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.184704 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.191788 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv85n\" (UniqueName: \"kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n\") pod \"ceilometer-0\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.329082 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.819952 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:01 crc kubenswrapper[4724]: I0223 17:52:01.936687 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerStarted","Data":"d1d5b4b00e7995f69e4da0df7800c266202e426038c8261134725ba4d1f241d1"} Feb 23 17:52:02 crc kubenswrapper[4724]: I0223 17:52:02.976916 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46766c09-b7dd-4263-8e07-089095bb5cac" path="/var/lib/kubelet/pods/46766c09-b7dd-4263-8e07-089095bb5cac/volumes" Feb 23 17:52:03 crc kubenswrapper[4724]: I0223 17:52:03.964107 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerStarted","Data":"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed"} Feb 23 17:52:03 crc kubenswrapper[4724]: I0223 17:52:03.964694 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerStarted","Data":"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6"} Feb 23 17:52:03 crc kubenswrapper[4724]: I0223 17:52:03.964709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerStarted","Data":"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914"} Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.057382 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" 
Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.062961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.063495 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.842919 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.990897 4724 generic.go:334] "Generic (PLEG): container finished" podID="aaff144a-3786-4d70-af6d-266870e4e6d2" containerID="9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811" exitCode=137 Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.992633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aaff144a-3786-4d70-af6d-266870e4e6d2","Type":"ContainerDied","Data":"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811"} Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.992662 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aaff144a-3786-4d70-af6d-266870e4e6d2","Type":"ContainerDied","Data":"64c0ddd05252330da59140a062bb40eaad0bfb53ad42cc7261b8505d211dad64"} Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.992765 4724 scope.go:117] "RemoveContainer" containerID="9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811" Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.992738 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.996927 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data\") pod \"aaff144a-3786-4d70-af6d-266870e4e6d2\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.997196 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle\") pod \"aaff144a-3786-4d70-af6d-266870e4e6d2\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.997343 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hqz\" (UniqueName: \"kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz\") pod \"aaff144a-3786-4d70-af6d-266870e4e6d2\" (UID: \"aaff144a-3786-4d70-af6d-266870e4e6d2\") " Feb 23 17:52:05 crc kubenswrapper[4724]: I0223 17:52:05.999013 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.002975 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz" (OuterVolumeSpecName: "kube-api-access-94hqz") pod "aaff144a-3786-4d70-af6d-266870e4e6d2" (UID: "aaff144a-3786-4d70-af6d-266870e4e6d2"). InnerVolumeSpecName "kube-api-access-94hqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.043476 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data" (OuterVolumeSpecName: "config-data") pod "aaff144a-3786-4d70-af6d-266870e4e6d2" (UID: "aaff144a-3786-4d70-af6d-266870e4e6d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.044495 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaff144a-3786-4d70-af6d-266870e4e6d2" (UID: "aaff144a-3786-4d70-af6d-266870e4e6d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.088839 4724 scope.go:117] "RemoveContainer" containerID="9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811" Feb 23 17:52:06 crc kubenswrapper[4724]: E0223 17:52:06.092417 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811\": container with ID starting with 9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811 not found: ID does not exist" containerID="9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.092456 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811"} err="failed to get container status \"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811\": rpc error: code = NotFound desc = could not find container \"9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811\": container with ID starting with 9d5fdbc41e70ae93eb242c198bff176337f54ec1f7ec7799daa55d8471c1d811 not found: ID does not exist" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.099484 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hqz\" (UniqueName: \"kubernetes.io/projected/aaff144a-3786-4d70-af6d-266870e4e6d2-kube-api-access-94hqz\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.099517 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.099530 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaff144a-3786-4d70-af6d-266870e4e6d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.324876 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.337285 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.375709 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:52:06 crc kubenswrapper[4724]: E0223 17:52:06.386322 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="aaff144a-3786-4d70-af6d-266870e4e6d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.386425 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaff144a-3786-4d70-af6d-266870e4e6d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.387888 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaff144a-3786-4d70-af6d-266870e4e6d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.388911 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.390766 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.394119 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.394819 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.395160 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.512291 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.512367 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmng5\" (UniqueName: \"kubernetes.io/projected/31f78c36-4f54-425d-87a6-3b0c7093a06c-kube-api-access-rmng5\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.512443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.512699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.512789 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.615551 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.615616 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.615738 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.615777 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmng5\" (UniqueName: \"kubernetes.io/projected/31f78c36-4f54-425d-87a6-3b0c7093a06c-kube-api-access-rmng5\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.615836 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.621672 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.628306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.634866 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.635465 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmng5\" (UniqueName: \"kubernetes.io/projected/31f78c36-4f54-425d-87a6-3b0c7093a06c-kube-api-access-rmng5\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.636901 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/31f78c36-4f54-425d-87a6-3b0c7093a06c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31f78c36-4f54-425d-87a6-3b0c7093a06c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.714203 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:06 crc kubenswrapper[4724]: I0223 17:52:06.964680 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaff144a-3786-4d70-af6d-266870e4e6d2" path="/var/lib/kubelet/pods/aaff144a-3786-4d70-af6d-266870e4e6d2/volumes" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.009191 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerStarted","Data":"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef"} Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.009756 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.033796 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.944369186 podStartE2EDuration="7.033780061s" podCreationTimestamp="2026-02-23 17:52:00 +0000 UTC" firstStartedPulling="2026-02-23 17:52:01.819175967 +0000 UTC m=+1277.635375567" lastFinishedPulling="2026-02-23 17:52:05.908586842 +0000 UTC m=+1281.724786442" observedRunningTime="2026-02-23 17:52:07.025997071 +0000 UTC m=+1282.842196671" watchObservedRunningTime="2026-02-23 17:52:07.033780061 +0000 UTC m=+1282.849979661" Feb 23 17:52:07 crc kubenswrapper[4724]: W0223 17:52:07.203718 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31f78c36_4f54_425d_87a6_3b0c7093a06c.slice/crio-4d42b608b17bdbd153276e64de5ef1a6376384926e83655fb83f98f417b52f4c WatchSource:0}: Error finding container 4d42b608b17bdbd153276e64de5ef1a6376384926e83655fb83f98f417b52f4c: Status 404 returned error can't find the container with id 4d42b608b17bdbd153276e64de5ef1a6376384926e83655fb83f98f417b52f4c Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.209975 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.276432 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.488427 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.488936 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.502657 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 17:52:07 crc kubenswrapper[4724]: I0223 17:52:07.505436 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.032116 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31f78c36-4f54-425d-87a6-3b0c7093a06c","Type":"ContainerStarted","Data":"ce9f03b80bdc4ff12a3f2bc6bf7e24a8690cef2f7daf9720f63e25b78e584041"} Feb 23 17:52:08 crc 
kubenswrapper[4724]: I0223 17:52:08.032492 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31f78c36-4f54-425d-87a6-3b0c7093a06c","Type":"ContainerStarted","Data":"4d42b608b17bdbd153276e64de5ef1a6376384926e83655fb83f98f417b52f4c"} Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.033641 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.045484 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.062759 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.062740195 podStartE2EDuration="2.062740195s" podCreationTimestamp="2026-02-23 17:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:08.053447619 +0000 UTC m=+1283.869647229" watchObservedRunningTime="2026-02-23 17:52:08.062740195 +0000 UTC m=+1283.878939805" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.238280 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"] Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.241708 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.274176 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"] Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350217 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350264 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350305 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350333 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350416 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdk6\" (UniqueName: 
\"kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.350463 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.452678 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.452773 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.452891 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdk6\" (UniqueName: \"kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.452973 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.453697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.453724 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.453858 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.453924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.454585 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.453954 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.455193 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.476862 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjdk6\" (UniqueName: \"kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6\") pod \"dnsmasq-dns-5678c8f4f-9w6qj\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:08 crc kubenswrapper[4724]: I0223 17:52:08.606504 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj"
Feb 23 17:52:09 crc kubenswrapper[4724]: I0223 17:52:09.112013 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"]
Feb 23 17:52:10 crc kubenswrapper[4724]: I0223 17:52:10.050541 4724 generic.go:334] "Generic (PLEG): container finished" podID="37950574-5957-4f62-8d9e-0decba9e87e0" containerID="cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff" exitCode=0
Feb 23 17:52:10 crc kubenswrapper[4724]: I0223 17:52:10.050664 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" event={"ID":"37950574-5957-4f62-8d9e-0decba9e87e0","Type":"ContainerDied","Data":"cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff"}
Feb 23 17:52:10 crc kubenswrapper[4724]: I0223 17:52:10.050975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" event={"ID":"37950574-5957-4f62-8d9e-0decba9e87e0","Type":"ContainerStarted","Data":"3a3ff76e8df08a3bcece1c4ec7f3f6b30196e7872b9bd1582fa6717bf353b3aa"}
Feb 23 17:52:10 crc kubenswrapper[4724]: I0223 17:52:10.998970 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.063618 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" event={"ID":"37950574-5957-4f62-8d9e-0decba9e87e0","Type":"ContainerStarted","Data":"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013"}
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.063696 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-log" containerID="cri-o://730af69de6073de9614d07a449de7dc7d3b37f3bfc710e74d6f106f87df75c13" gracePeriod=30
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.063796 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-api" containerID="cri-o://52d70ef2072bf7a24af0c68548af254c5d855ffe788c6d59e7d89e825836face" gracePeriod=30
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.063981 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj"
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.089884 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" podStartSLOduration=3.089867656 podStartE2EDuration="3.089867656s" podCreationTimestamp="2026-02-23 17:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:11.085102586 +0000 UTC m=+1286.901302186" watchObservedRunningTime="2026-02-23 17:52:11.089867656 +0000 UTC m=+1286.906067256"
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.715306 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.980381 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
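Each "SyncLoop DELETE" above is followed by one "Killing container with a grace period" entry per container, with gracePeriod=30 taken from the pod's terminationGracePeriodSeconds. The standard behavior behind those entries: the runtime delivers SIGTERM first and escalates to SIGKILL only if the container outlives the grace period. A toy process-level sketch of the same escalation; killWithGrace is a hypothetical helper operating on a local process, not CRI or kubelet code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace mimics the SIGTERM-then-SIGKILL escalation implied by
// the "Killing container with a grace period" entries above.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop request
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done: // exited within the grace period (exit code 143 if SIGTERM was fatal)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate: SIGKILL, exit code 137
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 30*time.Second)
	fmt.Println("stopped:", cmd.ProcessState)
}
```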
containerName="ceilometer-central-agent" containerID="cri-o://ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed" gracePeriod=30 Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.981498 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="proxy-httpd" containerID="cri-o://658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef" gracePeriod=30 Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.981558 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="sg-core" containerID="cri-o://5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6" gracePeriod=30 Feb 23 17:52:11 crc kubenswrapper[4724]: I0223 17:52:11.981648 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-notification-agent" containerID="cri-o://e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914" gracePeriod=30 Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.076263 4724 generic.go:334] "Generic (PLEG): container finished" podID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerID="52d70ef2072bf7a24af0c68548af254c5d855ffe788c6d59e7d89e825836face" exitCode=0 Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.076300 4724 generic.go:334] "Generic (PLEG): container finished" podID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerID="730af69de6073de9614d07a449de7dc7d3b37f3bfc710e74d6f106f87df75c13" exitCode=143 Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.076345 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerDied","Data":"52d70ef2072bf7a24af0c68548af254c5d855ffe788c6d59e7d89e825836face"} Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.076438 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerDied","Data":"730af69de6073de9614d07a449de7dc7d3b37f3bfc710e74d6f106f87df75c13"} Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.418180 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.536329 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle\") pod \"261b32f3-e185-41ff-bd81-f6c6a5baea04\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.536390 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs\") pod \"261b32f3-e185-41ff-bd81-f6c6a5baea04\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.536432 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2sz\" (UniqueName: \"kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz\") pod \"261b32f3-e185-41ff-bd81-f6c6a5baea04\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.536629 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data\") pod \"261b32f3-e185-41ff-bd81-f6c6a5baea04\" (UID: \"261b32f3-e185-41ff-bd81-f6c6a5baea04\") " Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.537410 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs" (OuterVolumeSpecName: "logs") pod "261b32f3-e185-41ff-bd81-f6c6a5baea04" (UID: "261b32f3-e185-41ff-bd81-f6c6a5baea04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.542784 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz" (OuterVolumeSpecName: "kube-api-access-xb2sz") pod "261b32f3-e185-41ff-bd81-f6c6a5baea04" (UID: "261b32f3-e185-41ff-bd81-f6c6a5baea04"). InnerVolumeSpecName "kube-api-access-xb2sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.578347 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data" (OuterVolumeSpecName: "config-data") pod "261b32f3-e185-41ff-bd81-f6c6a5baea04" (UID: "261b32f3-e185-41ff-bd81-f6c6a5baea04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.594753 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261b32f3-e185-41ff-bd81-f6c6a5baea04" (UID: "261b32f3-e185-41ff-bd81-f6c6a5baea04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.638724 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.638750 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261b32f3-e185-41ff-bd81-f6c6a5baea04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.638759 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261b32f3-e185-41ff-bd81-f6c6a5baea04-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:12 crc kubenswrapper[4724]: I0223 17:52:12.638767 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2sz\" (UniqueName: \"kubernetes.io/projected/261b32f3-e185-41ff-bd81-f6c6a5baea04-kube-api-access-xb2sz\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.086857 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"261b32f3-e185-41ff-bd81-f6c6a5baea04","Type":"ContainerDied","Data":"5d13d0c51589a1840f27c9046ea19ad9686b3b02ab33c6bd98099f3d0627fa8b"} Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.087140 4724 scope.go:117] "RemoveContainer" containerID="52d70ef2072bf7a24af0c68548af254c5d855ffe788c6d59e7d89e825836face" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.086913 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.090901 4724 generic.go:334] "Generic (PLEG): container finished" podID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerID="658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef" exitCode=0 Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.091023 4724 generic.go:334] "Generic (PLEG): container finished" podID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerID="5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6" exitCode=2 Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.091105 4724 generic.go:334] "Generic (PLEG): container finished" podID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerID="ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed" exitCode=0 Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.090950 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerDied","Data":"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef"} Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.091332 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerDied","Data":"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6"} Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.091594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerDied","Data":"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed"} Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.112503 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 23 17:52:13 crc kubenswrapper[4724]: E0223 17:52:13.122184 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24ea9ad1_e07f_43c2_a841_42f927a66a79.slice/crio-conmon-ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24ea9ad1_e07f_43c2_a841_42f927a66a79.slice/crio-ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.125342 4724 scope.go:117] "RemoveContainer" containerID="730af69de6073de9614d07a449de7dc7d3b37f3bfc710e74d6f106f87df75c13" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.127617 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.148471 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:13 crc kubenswrapper[4724]: E0223 17:52:13.148925 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-api" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.148944 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-api" Feb 23 17:52:13 crc kubenswrapper[4724]: E0223 17:52:13.148959 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-log" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.148966 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-log" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.149180 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-api" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.149200 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" containerName="nova-api-log" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.151968 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.153209 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.157258 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.157682 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.158460 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273191 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273236 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wnr\" (UniqueName: \"kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273273 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273382 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.273573 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.375869 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.375957 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data\") pod \"nova-api-0\" (UID: 
\"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.376007 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.376132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.376162 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wnr\" (UniqueName: \"kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.376206 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.376988 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.382164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.382519 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.382803 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.384893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.393921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wnr\" (UniqueName: \"kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr\") pod \"nova-api-0\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " pod="openstack/nova-api-0" 
Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.474049 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:13 crc kubenswrapper[4724]: I0223 17:52:13.965163 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:13 crc kubenswrapper[4724]: W0223 17:52:13.967820 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2cf1f00_6743_4d49_a79e_4dc0977b2145.slice/crio-08c8f3d898b151d1af6cedc577f41e0122e44c9f32a3aec9167e32bf03150b4b WatchSource:0}: Error finding container 08c8f3d898b151d1af6cedc577f41e0122e44c9f32a3aec9167e32bf03150b4b: Status 404 returned error can't find the container with id 08c8f3d898b151d1af6cedc577f41e0122e44c9f32a3aec9167e32bf03150b4b Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.105708 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerStarted","Data":"08c8f3d898b151d1af6cedc577f41e0122e44c9f32a3aec9167e32bf03150b4b"} Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.471250 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512493 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512548 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512591 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512618 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512652 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512672 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512719 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-sv85n\" (UniqueName: \"kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.512770 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs\") pod \"24ea9ad1-e07f-43c2-a841-42f927a66a79\" (UID: \"24ea9ad1-e07f-43c2-a841-42f927a66a79\") " Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.513690 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.514065 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.520693 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts" (OuterVolumeSpecName: "scripts") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.526808 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n" (OuterVolumeSpecName: "kube-api-access-sv85n") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "kube-api-access-sv85n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.577180 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.600586 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.608972 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615253 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615291 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615304 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615315 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615329 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24ea9ad1-e07f-43c2-a841-42f927a66a79-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615340 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.615351 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv85n\" (UniqueName: \"kubernetes.io/projected/24ea9ad1-e07f-43c2-a841-42f927a66a79-kube-api-access-sv85n\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.643021 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data" (OuterVolumeSpecName: "config-data") pod "24ea9ad1-e07f-43c2-a841-42f927a66a79" (UID: "24ea9ad1-e07f-43c2-a841-42f927a66a79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.717985 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea9ad1-e07f-43c2-a841-42f927a66a79-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:14 crc kubenswrapper[4724]: I0223 17:52:14.965781 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261b32f3-e185-41ff-bd81-f6c6a5baea04" path="/var/lib/kubelet/pods/261b32f3-e185-41ff-bd81-f6c6a5baea04/volumes" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.119438 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerStarted","Data":"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c"} Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.119489 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerStarted","Data":"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148"} Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.122336 4724 generic.go:334] "Generic (PLEG): container finished" podID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerID="e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914" exitCode=0 Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.122391 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.122406 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerDied","Data":"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914"} Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.122470 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24ea9ad1-e07f-43c2-a841-42f927a66a79","Type":"ContainerDied","Data":"d1d5b4b00e7995f69e4da0df7800c266202e426038c8261134725ba4d1f241d1"} Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.122492 4724 scope.go:117] "RemoveContainer" containerID="658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.156325 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.156300476 podStartE2EDuration="2.156300476s" podCreationTimestamp="2026-02-23 17:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:15.140036843 +0000 UTC m=+1290.956236453" watchObservedRunningTime="2026-02-23 17:52:15.156300476 +0000 UTC m=+1290.972500076" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.165855 4724 scope.go:117] "RemoveContainer" containerID="5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.185482 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.209949 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.211624 4724 scope.go:117] "RemoveContainer" 
containerID="e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.220900 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.221358 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-notification-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221380 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-notification-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.221391 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-central-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221402 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-central-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.221413 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="sg-core" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221435 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="sg-core" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.221446 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="proxy-httpd" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221451 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="proxy-httpd" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221632 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-central-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221647 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="ceilometer-notification-agent" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221675 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="proxy-httpd" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.221689 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" containerName="sg-core" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.223591 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.227993 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.228239 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.228400 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.229144 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.273206 4724 scope.go:117] "RemoveContainer" containerID="ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.292542 4724 scope.go:117] "RemoveContainer" containerID="658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.293164 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef\": container with ID starting with 658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef not found: ID does not exist" containerID="658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293204 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef"} err="failed to get container status \"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef\": rpc error: code = NotFound desc = could not find container \"658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef\": container with ID starting with 658c57f3bd89c344d0075bd72ad4b5b24b3d44fc349016bc25cb1ee5545106ef not found: ID does not exist" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293228 4724 scope.go:117] "RemoveContainer" containerID="5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.293629 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6\": container with ID starting with 5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6 not found: ID does not exist" containerID="5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293663 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6"} err="failed to get container status \"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6\": rpc error: code = NotFound desc = could not find container \"5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6\": container with ID starting with 5b9e9a4e2153a82771bc918a63b040e0fc6c611218cb5e11a15a2f53df9ad6c6 not found: ID does not exist" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293686 4724 scope.go:117] "RemoveContainer" containerID="e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914" Feb 23 17:52:15 
crc kubenswrapper[4724]: E0223 17:52:15.293941 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914\": container with ID starting with e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914 not found: ID does not exist" containerID="e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293964 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914"} err="failed to get container status \"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914\": rpc error: code = NotFound desc = could not find container \"e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914\": container with ID starting with e3084ee3ca11700628fe2ae49dcc4fbc6f8bd95e40d7e949bf98f45a39be1914 not found: ID does not exist" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.293976 4724 scope.go:117] "RemoveContainer" containerID="ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed" Feb 23 17:52:15 crc kubenswrapper[4724]: E0223 17:52:15.294269 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed\": container with ID starting with ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed not found: ID does not exist" containerID="ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.294304 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed"} err="failed to get container status \"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed\": rpc error: code = NotFound desc = could not find container \"ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed\": container with ID starting with ef003a993d34d6613236bfdbeb21e5e033d020f84c061ea09e141f49792636ed not found: ID does not exist" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.328726 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-run-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.328784 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcmd\" (UniqueName: \"kubernetes.io/projected/2ed30198-318f-476e-83b7-e93ab4c5625d-kube-api-access-qvcmd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.328906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.328953 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-log-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.328984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-config-data\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.329062 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.329098 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.329120 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-scripts\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430292 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430342 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430358 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-scripts\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430382 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-run-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430406 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcmd\" (UniqueName: \"kubernetes.io/projected/2ed30198-318f-476e-83b7-e93ab4c5625d-kube-api-access-qvcmd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 
17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430503 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430534 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-log-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.430554 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-config-data\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.431084 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-log-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.431382 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ed30198-318f-476e-83b7-e93ab4c5625d-run-httpd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.436173 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.437038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.437119 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-scripts\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.437344 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-config-data\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.437878 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ed30198-318f-476e-83b7-e93ab4c5625d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.448361 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcmd\" (UniqueName: \"kubernetes.io/projected/2ed30198-318f-476e-83b7-e93ab4c5625d-kube-api-access-qvcmd\") pod \"ceilometer-0\" (UID: \"2ed30198-318f-476e-83b7-e93ab4c5625d\") " pod="openstack/ceilometer-0" Feb 23 17:52:15 crc kubenswrapper[4724]: I0223 17:52:15.570532 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 17:52:16 crc kubenswrapper[4724]: I0223 17:52:16.031040 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 17:52:16 crc kubenswrapper[4724]: W0223 17:52:16.040448 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ed30198_318f_476e_83b7_e93ab4c5625d.slice/crio-2745a46ffd174b98cc19b8161e9ed47cc1b2af8bfb906a8d3245531471f7d83b WatchSource:0}: Error finding container 2745a46ffd174b98cc19b8161e9ed47cc1b2af8bfb906a8d3245531471f7d83b: Status 404 returned error can't find the container with id 2745a46ffd174b98cc19b8161e9ed47cc1b2af8bfb906a8d3245531471f7d83b Feb 23 17:52:16 crc kubenswrapper[4724]: I0223 17:52:16.131963 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ed30198-318f-476e-83b7-e93ab4c5625d","Type":"ContainerStarted","Data":"2745a46ffd174b98cc19b8161e9ed47cc1b2af8bfb906a8d3245531471f7d83b"} Feb 23 17:52:16 crc kubenswrapper[4724]: I0223 17:52:16.714612 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:16 crc kubenswrapper[4724]: I0223 17:52:16.736281 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:16 crc kubenswrapper[4724]: I0223 17:52:16.964568 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ea9ad1-e07f-43c2-a841-42f927a66a79" path="/var/lib/kubelet/pods/24ea9ad1-e07f-43c2-a841-42f927a66a79/volumes" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.158245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ed30198-318f-476e-83b7-e93ab4c5625d","Type":"ContainerStarted","Data":"a1fa4300c7bfe4f30d9d56a0508de8b7901716be6313c62c6928b1ff87832870"} Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.158718 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ed30198-318f-476e-83b7-e93ab4c5625d","Type":"ContainerStarted","Data":"fec95b8a842e0d717c816b01843995813ed9827eb696a3007ce7e6f9bc9c8f2f"} Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.176888 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.338001 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-v8dph"] Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.339426 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.341200 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.345187 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.350340 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v8dph"] Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.466542 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.466813 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.467026 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.467093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcngk\" (UniqueName: \"kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.568925 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.569001 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.569062 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.569086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcngk\" (UniqueName: 
\"kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.575026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.575053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.575181 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.597452 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcngk\" (UniqueName: \"kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk\") pod \"nova-cell1-cell-mapping-v8dph\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:17 crc kubenswrapper[4724]: I0223 17:52:17.664156 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:18 crc kubenswrapper[4724]: W0223 17:52:18.113783 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f112391_decc_4aa2_a230_699a0015c306.slice/crio-ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b WatchSource:0}: Error finding container ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b: Status 404 returned error can't find the container with id ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.115572 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v8dph"] Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.173529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ed30198-318f-476e-83b7-e93ab4c5625d","Type":"ContainerStarted","Data":"e59b82b837ac036317e08bec68a5ba2e86a811f52bd0b7378b57821c0e491793"} Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.175986 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v8dph" event={"ID":"8f112391-decc-4aa2-a230-699a0015c306","Type":"ContainerStarted","Data":"ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b"} Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.609205 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.730665 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:52:18 crc kubenswrapper[4724]: I0223 17:52:18.730958 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="dnsmasq-dns" containerID="cri-o://1fe1cc8de80d03f4c774cbc8279a2802d878d46a140e6309a3274349cd326acf" gracePeriod=10 Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.191372 4724 generic.go:334] "Generic (PLEG): container finished" podID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerID="1fe1cc8de80d03f4c774cbc8279a2802d878d46a140e6309a3274349cd326acf" exitCode=0 Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.191533 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" event={"ID":"c546e0ba-a0ef-44b7-a810-e405f8bca93e","Type":"ContainerDied","Data":"1fe1cc8de80d03f4c774cbc8279a2802d878d46a140e6309a3274349cd326acf"} Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.191928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" event={"ID":"c546e0ba-a0ef-44b7-a810-e405f8bca93e","Type":"ContainerDied","Data":"1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe"} Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.191968 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1958cfb610345bae61b9d16606be1dd199a238fff7300767000b1cb4afc988fe" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.197936 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v8dph" event={"ID":"8f112391-decc-4aa2-a230-699a0015c306","Type":"ContainerStarted","Data":"191490c7659fe0b2d6221d27ac21a7ad4e46db8bc840b8e8bfe775a2251f5c71"} Feb 23 17:52:19 crc kubenswrapper[4724]: 
I0223 17:52:19.214758 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-v8dph" podStartSLOduration=2.214739875 podStartE2EDuration="2.214739875s" podCreationTimestamp="2026-02-23 17:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:19.210759348 +0000 UTC m=+1295.026958948" watchObservedRunningTime="2026-02-23 17:52:19.214739875 +0000 UTC m=+1295.030939475" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.292383 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417064 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxwbr\" (UniqueName: \"kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417583 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417636 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417713 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.417763 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc\") pod \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\" (UID: \"c546e0ba-a0ef-44b7-a810-e405f8bca93e\") " Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.452051 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr" (OuterVolumeSpecName: "kube-api-access-cxwbr") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "kube-api-access-cxwbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.520377 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxwbr\" (UniqueName: \"kubernetes.io/projected/c546e0ba-a0ef-44b7-a810-e405f8bca93e-kube-api-access-cxwbr\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.534075 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.622001 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.651200 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config" (OuterVolumeSpecName: "config") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.651314 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.657679 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.667709 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c546e0ba-a0ef-44b7-a810-e405f8bca93e" (UID: "c546e0ba-a0ef-44b7-a810-e405f8bca93e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.724331 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.724423 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.724438 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:19 crc kubenswrapper[4724]: I0223 17:52:19.724448 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c546e0ba-a0ef-44b7-a810-e405f8bca93e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.214040 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc99f56d9-tp8hh" Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.214080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ed30198-318f-476e-83b7-e93ab4c5625d","Type":"ContainerStarted","Data":"172f2c31c429dceb21734c2699effef10f12e95b91ecd502b8e9682d8e81ccf6"} Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.254509 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.225684201 podStartE2EDuration="5.254482785s" podCreationTimestamp="2026-02-23 17:52:15 +0000 UTC" firstStartedPulling="2026-02-23 17:52:16.044207264 +0000 UTC m=+1291.860406854" lastFinishedPulling="2026-02-23 17:52:19.073005848 +0000 UTC m=+1294.889205438" observedRunningTime="2026-02-23 17:52:20.254164877 +0000 UTC m=+1296.070364477" watchObservedRunningTime="2026-02-23 17:52:20.254482785 +0000 UTC m=+1296.070682425" Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.304924 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.314190 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc99f56d9-tp8hh"] Feb 23 17:52:20 crc kubenswrapper[4724]: I0223 17:52:20.986014 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" path="/var/lib/kubelet/pods/c546e0ba-a0ef-44b7-a810-e405f8bca93e/volumes" Feb 23 17:52:21 crc kubenswrapper[4724]: I0223 17:52:21.230121 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 17:52:23 crc kubenswrapper[4724]: I0223 17:52:23.475429 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:52:23 crc kubenswrapper[4724]: I0223 17:52:23.475477 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:52:24 crc kubenswrapper[4724]: I0223 17:52:24.289643 4724 generic.go:334] "Generic (PLEG): container finished" podID="8f112391-decc-4aa2-a230-699a0015c306" containerID="191490c7659fe0b2d6221d27ac21a7ad4e46db8bc840b8e8bfe775a2251f5c71" exitCode=0 Feb 23 17:52:24 crc 
kubenswrapper[4724]: I0223 17:52:24.289773 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v8dph" event={"ID":"8f112391-decc-4aa2-a230-699a0015c306","Type":"ContainerDied","Data":"191490c7659fe0b2d6221d27ac21a7ad4e46db8bc840b8e8bfe775a2251f5c71"} Feb 23 17:52:24 crc kubenswrapper[4724]: I0223 17:52:24.493534 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:24 crc kubenswrapper[4724]: I0223 17:52:24.493560 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.698201 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.841812 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data\") pod \"8f112391-decc-4aa2-a230-699a0015c306\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.842187 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle\") pod \"8f112391-decc-4aa2-a230-699a0015c306\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.842424 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts\") pod \"8f112391-decc-4aa2-a230-699a0015c306\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.842578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcngk\" (UniqueName: \"kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk\") pod \"8f112391-decc-4aa2-a230-699a0015c306\" (UID: \"8f112391-decc-4aa2-a230-699a0015c306\") " Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.849686 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts" (OuterVolumeSpecName: "scripts") pod "8f112391-decc-4aa2-a230-699a0015c306" (UID: "8f112391-decc-4aa2-a230-699a0015c306"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.849757 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk" (OuterVolumeSpecName: "kube-api-access-vcngk") pod "8f112391-decc-4aa2-a230-699a0015c306" (UID: "8f112391-decc-4aa2-a230-699a0015c306"). InnerVolumeSpecName "kube-api-access-vcngk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.872623 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f112391-decc-4aa2-a230-699a0015c306" (UID: "8f112391-decc-4aa2-a230-699a0015c306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.873564 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data" (OuterVolumeSpecName: "config-data") pod "8f112391-decc-4aa2-a230-699a0015c306" (UID: "8f112391-decc-4aa2-a230-699a0015c306"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.944951 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcngk\" (UniqueName: \"kubernetes.io/projected/8f112391-decc-4aa2-a230-699a0015c306-kube-api-access-vcngk\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.945177 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.945270 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:25 crc kubenswrapper[4724]: I0223 17:52:25.945345 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f112391-decc-4aa2-a230-699a0015c306-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.317456 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v8dph" event={"ID":"8f112391-decc-4aa2-a230-699a0015c306","Type":"ContainerDied","Data":"ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b"} Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.317881 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba25b549c98b41879171cf759402c43eba5bdc8a841d1c4d9f5f423fa049e51b" Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.317545 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v8dph" Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.491754 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.492005 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" containerName="nova-scheduler-scheduler" containerID="cri-o://49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a" gracePeriod=30 Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.508169 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.508422 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-log" containerID="cri-o://f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c" gracePeriod=30 Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.508581 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-api" containerID="cri-o://96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148" gracePeriod=30 Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.528951 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.529360 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-log" containerID="cri-o://9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003" gracePeriod=30 Feb 23 17:52:26 crc kubenswrapper[4724]: I0223 17:52:26.529889 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-metadata" containerID="cri-o://792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4" gracePeriod=30 Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.329818 4724 generic.go:334] "Generic (PLEG): container finished" podID="0b3714b0-4281-4cf0-be57-789820a25116" containerID="9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003" exitCode=143 Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.329905 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerDied","Data":"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003"} Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.331917 4724 generic.go:334] "Generic (PLEG): container finished" podID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerID="f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c" exitCode=143 Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.331954 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerDied","Data":"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c"} Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.751993 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.752335 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.940110 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:52:27 crc kubenswrapper[4724]: I0223 17:52:27.955496 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.086730 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs\") pod \"0b3714b0-4281-4cf0-be57-789820a25116\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.086807 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z4cz\" (UniqueName: \"kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz\") pod \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.086841 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data\") pod \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.086921 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs\") pod \"0b3714b0-4281-4cf0-be57-789820a25116\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.086951 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle\") pod \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\" (UID: \"9d2f1a31-7f08-451d-962d-88ee8fd7f246\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.087014 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58l72\" (UniqueName: \"kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72\") pod \"0b3714b0-4281-4cf0-be57-789820a25116\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.087052 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle\") pod \"0b3714b0-4281-4cf0-be57-789820a25116\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.087079 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data\") pod \"0b3714b0-4281-4cf0-be57-789820a25116\" (UID: \"0b3714b0-4281-4cf0-be57-789820a25116\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.092882 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs" (OuterVolumeSpecName: "logs") pod "0b3714b0-4281-4cf0-be57-789820a25116" (UID: "0b3714b0-4281-4cf0-be57-789820a25116"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.104461 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72" (OuterVolumeSpecName: "kube-api-access-58l72") pod "0b3714b0-4281-4cf0-be57-789820a25116" (UID: "0b3714b0-4281-4cf0-be57-789820a25116"). InnerVolumeSpecName "kube-api-access-58l72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.106834 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz" (OuterVolumeSpecName: "kube-api-access-8z4cz") pod "9d2f1a31-7f08-451d-962d-88ee8fd7f246" (UID: "9d2f1a31-7f08-451d-962d-88ee8fd7f246"). InnerVolumeSpecName "kube-api-access-8z4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.127611 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data" (OuterVolumeSpecName: "config-data") pod "0b3714b0-4281-4cf0-be57-789820a25116" (UID: "0b3714b0-4281-4cf0-be57-789820a25116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.133276 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data" (OuterVolumeSpecName: "config-data") pod "9d2f1a31-7f08-451d-962d-88ee8fd7f246" (UID: "9d2f1a31-7f08-451d-962d-88ee8fd7f246"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.134412 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b3714b0-4281-4cf0-be57-789820a25116" (UID: "0b3714b0-4281-4cf0-be57-789820a25116"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.154059 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d2f1a31-7f08-451d-962d-88ee8fd7f246" (UID: "9d2f1a31-7f08-451d-962d-88ee8fd7f246"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.163892 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b3714b0-4281-4cf0-be57-789820a25116" (UID: "0b3714b0-4281-4cf0-be57-789820a25116"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.173792 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.194277 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.194879 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z4cz\" (UniqueName: \"kubernetes.io/projected/9d2f1a31-7f08-451d-962d-88ee8fd7f246-kube-api-access-8z4cz\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.194949 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.195007 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b3714b0-4281-4cf0-be57-789820a25116-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.195061 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f1a31-7f08-451d-962d-88ee8fd7f246-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.195115 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58l72\" (UniqueName: \"kubernetes.io/projected/0b3714b0-4281-4cf0-be57-789820a25116-kube-api-access-58l72\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.195170 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.195228 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3714b0-4281-4cf0-be57-789820a25116-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296333 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296420 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc 
kubenswrapper[4724]: I0223 17:52:28.296547 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296639 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296712 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296796 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wnr\" (UniqueName: \"kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr\") pod \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\" (UID: \"c2cf1f00-6743-4d49-a79e-4dc0977b2145\") " Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.296921 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs" (OuterVolumeSpecName: "logs") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.297293 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2cf1f00-6743-4d49-a79e-4dc0977b2145-logs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.300658 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr" (OuterVolumeSpecName: "kube-api-access-w4wnr") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "kube-api-access-w4wnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.324119 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data" (OuterVolumeSpecName: "config-data") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.325621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.340617 4724 generic.go:334] "Generic (PLEG): container finished" podID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" containerID="49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a" exitCode=0 Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.340690 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d2f1a31-7f08-451d-962d-88ee8fd7f246","Type":"ContainerDied","Data":"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.340725 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d2f1a31-7f08-451d-962d-88ee8fd7f246","Type":"ContainerDied","Data":"781596aaefb4dd95f00e5e80d58a51d1008a1868d2cfb0826d2ec3df1d9439ff"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.340746 4724 scope.go:117] "RemoveContainer" containerID="49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.340877 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.349078 4724 generic.go:334] "Generic (PLEG): container finished" podID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerID="96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148" exitCode=0 Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.349159 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerDied","Data":"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.349171 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.349188 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2cf1f00-6743-4d49-a79e-4dc0977b2145","Type":"ContainerDied","Data":"08c8f3d898b151d1af6cedc577f41e0122e44c9f32a3aec9167e32bf03150b4b"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.354299 4724 generic.go:334] "Generic (PLEG): container finished" podID="0b3714b0-4281-4cf0-be57-789820a25116" containerID="792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4" exitCode=0 Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.354343 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerDied","Data":"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.354368 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.354370 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b3714b0-4281-4cf0-be57-789820a25116","Type":"ContainerDied","Data":"6e599623739e2360c620e636c930b00bd74642869deac683d561b43f74b9a545"} Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.356314 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.356818 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c2cf1f00-6743-4d49-a79e-4dc0977b2145" (UID: "c2cf1f00-6743-4d49-a79e-4dc0977b2145"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.373104 4724 scope.go:117] "RemoveContainer" containerID="49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.373591 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a\": container with ID starting with 49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a not found: ID does not exist" containerID="49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.373647 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a"} err="failed to get container status \"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a\": rpc error: code = NotFound desc = could not find container \"49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a\": container with ID starting with 49aa0036214b5ead073c6b65b357033059d6c529826df11f2dd7d4b85177798a not found: ID does not exist" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.373668 4724 scope.go:117] "RemoveContainer" containerID="96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.396785 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.402034 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wnr\" (UniqueName: \"kubernetes.io/projected/c2cf1f00-6743-4d49-a79e-4dc0977b2145-kube-api-access-w4wnr\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.402068 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.402079 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.402090 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.402098 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2cf1f00-6743-4d49-a79e-4dc0977b2145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.407959 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.418264 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.418336 4724 scope.go:117] "RemoveContainer" containerID="f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.432271 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.440509 4724 scope.go:117] "RemoveContainer" containerID="96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.444842 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148\": container with ID starting with 96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148 not found: ID does not exist" containerID="96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.444898 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148"} err="failed to get container status \"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148\": rpc error: code = NotFound desc = could not find container \"96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148\": container with ID starting with 96d70aaebeb594e6e2254a9c1f8d156fbb2b9e98acc191a49aeb45f92bfb5148 not found: ID does not exist" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.444931 4724 scope.go:117] "RemoveContainer" containerID="f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.446986 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c\": container with ID starting with f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c not found: ID does not exist" containerID="f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.447130 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c"} err="failed to get container status \"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c\": rpc error: code = NotFound desc = could not 
find container \"f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c\": container with ID starting with f8442da9b4d527679e023beef9c5ecc1305609850d4e84fa8098a032d511a82c not found: ID does not exist" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.447224 4724 scope.go:117] "RemoveContainer" containerID="792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.458114 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.458744 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-log" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.458850 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-log" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.458910 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="init" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.458967 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="init" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.459032 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-api" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.459114 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-api" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.459275 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="dnsmasq-dns" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.459907 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="dnsmasq-dns" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.459994 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" containerName="nova-scheduler-scheduler" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460054 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" containerName="nova-scheduler-scheduler" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.460108 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-metadata" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460167 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-metadata" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.460237 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f112391-decc-4aa2-a230-699a0015c306" containerName="nova-manage" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460291 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f112391-decc-4aa2-a230-699a0015c306" containerName="nova-manage" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.460350 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-log" Feb 23 17:52:28 crc kubenswrapper[4724]: 
I0223 17:52:28.460424 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-log" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460688 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-metadata" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460759 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3714b0-4281-4cf0-be57-789820a25116" containerName="nova-metadata-log" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460826 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-api" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460880 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" containerName="nova-api-log" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460943 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f112391-decc-4aa2-a230-699a0015c306" containerName="nova-manage" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.460999 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" containerName="nova-scheduler-scheduler" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.461056 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c546e0ba-a0ef-44b7-a810-e405f8bca93e" containerName="dnsmasq-dns" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.462251 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.464173 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.465943 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.471813 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.473813 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.476052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.485167 4724 scope.go:117] "RemoveContainer" containerID="9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.495132 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.508443 4724 scope.go:117] "RemoveContainer" containerID="792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.508553 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.508802 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4\": container with ID starting with 792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4 not found: ID does not exist" containerID="792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.508831 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4"} err="failed to get container status \"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4\": rpc error: code = NotFound desc = could not find container \"792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4\": container with ID starting with 792a0d4d43c661958eda7724bb846bc7d555e66ea15dabb0504ba4b891b4e8b4 not found: ID does not exist" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.508852 4724 scope.go:117] "RemoveContainer" containerID="9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003" Feb 23 17:52:28 crc kubenswrapper[4724]: E0223 17:52:28.509080 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003\": container with ID starting with 9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003 not found: ID does not exist" containerID="9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.509102 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003"} err="failed to get container status \"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003\": rpc error: code = NotFound desc = could not find container \"9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003\": container with ID starting with 9a51b8a304a68102b316f2aa35c02397f86a4b236ec2c4ec7cc8d3a134a06003 not found: ID does not exist" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.605287 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " 
pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.605340 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlqv\" (UniqueName: \"kubernetes.io/projected/9365a64c-1314-4df5-b7b2-ed56c6d7a358-kube-api-access-4xlqv\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.605374 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.605580 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mc4w\" (UniqueName: \"kubernetes.io/projected/05bf40b5-2154-40d2-8714-7e7d24d42786-kube-api-access-6mc4w\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.605819 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-config-data\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.606097 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.606202 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9365a64c-1314-4df5-b7b2-ed56c6d7a358-logs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.606232 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-config-data\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.683746 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.694013 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.704969 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.707035 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709286 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-config-data\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709432 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709473 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709490 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9365a64c-1314-4df5-b7b2-ed56c6d7a358-logs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709520 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-config-data\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709644 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709708 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlqv\" (UniqueName: \"kubernetes.io/projected/9365a64c-1314-4df5-b7b2-ed56c6d7a358-kube-api-access-4xlqv\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709741 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709771 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mc4w\" (UniqueName: \"kubernetes.io/projected/05bf40b5-2154-40d2-8714-7e7d24d42786-kube-api-access-6mc4w\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.710060 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9365a64c-1314-4df5-b7b2-ed56c6d7a358-logs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.709650 4724 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.711772 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.713029 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-config-data\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.714100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.714307 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-config-data\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.714573 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9365a64c-1314-4df5-b7b2-ed56c6d7a358-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.715523 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf40b5-2154-40d2-8714-7e7d24d42786-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.723969 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.734244 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mc4w\" (UniqueName: \"kubernetes.io/projected/05bf40b5-2154-40d2-8714-7e7d24d42786-kube-api-access-6mc4w\") pod \"nova-scheduler-0\" (UID: \"05bf40b5-2154-40d2-8714-7e7d24d42786\") " pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.735349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlqv\" (UniqueName: \"kubernetes.io/projected/9365a64c-1314-4df5-b7b2-ed56c6d7a358-kube-api-access-4xlqv\") pod \"nova-metadata-0\" (UID: \"9365a64c-1314-4df5-b7b2-ed56c6d7a358\") " pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.779093 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.791498 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.811588 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2bkd\" (UniqueName: \"kubernetes.io/projected/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-kube-api-access-x2bkd\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.811642 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.811694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-config-data\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.811731 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.811986 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-public-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.812031 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-logs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914478 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-public-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914537 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-logs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2bkd\" (UniqueName: \"kubernetes.io/projected/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-kube-api-access-x2bkd\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914634 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-config-data\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.914719 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.916048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-logs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.922135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-public-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.922276 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-config-data\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.922525 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.922808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:28 crc kubenswrapper[4724]: I0223 17:52:28.930914 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2bkd\" (UniqueName: \"kubernetes.io/projected/4bef9e90-cdd6-4eb6-8801-3f7b07bc9363-kube-api-access-x2bkd\") pod \"nova-api-0\" (UID: \"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363\") " pod="openstack/nova-api-0" Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:28.977839 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3714b0-4281-4cf0-be57-789820a25116" path="/var/lib/kubelet/pods/0b3714b0-4281-4cf0-be57-789820a25116/volumes" Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:28.978691 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2f1a31-7f08-451d-962d-88ee8fd7f246" path="/var/lib/kubelet/pods/9d2f1a31-7f08-451d-962d-88ee8fd7f246/volumes" Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:28.979290 4724 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="c2cf1f00-6743-4d49-a79e-4dc0977b2145" path="/var/lib/kubelet/pods/c2cf1f00-6743-4d49-a79e-4dc0977b2145/volumes" Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:29.196810 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:29.977602 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 17:52:29 crc kubenswrapper[4724]: W0223 17:52:29.978070 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9365a64c_1314_4df5_b7b2_ed56c6d7a358.slice/crio-04b5223794320b41b75a432b99a0553c7dd39a4a4863eff6b682c924a7d9f010 WatchSource:0}: Error finding container 04b5223794320b41b75a432b99a0553c7dd39a4a4863eff6b682c924a7d9f010: Status 404 returned error can't find the container with id 04b5223794320b41b75a432b99a0553c7dd39a4a4863eff6b682c924a7d9f010 Feb 23 17:52:29 crc kubenswrapper[4724]: I0223 17:52:29.987736 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.168583 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 17:52:30 crc kubenswrapper[4724]: W0223 17:52:30.171919 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bef9e90_cdd6_4eb6_8801_3f7b07bc9363.slice/crio-8c3dc2db2820906db6cead51951dad216e07a2891f97a86c9e13b41ebeef994b WatchSource:0}: Error finding container 8c3dc2db2820906db6cead51951dad216e07a2891f97a86c9e13b41ebeef994b: Status 404 returned error can't find the container with id 8c3dc2db2820906db6cead51951dad216e07a2891f97a86c9e13b41ebeef994b Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.383018 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363","Type":"ContainerStarted","Data":"4dc7784f7c8a5a4a73540e1eb96ecd09ea6767b0154d057812256caa67d30ab1"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.383093 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363","Type":"ContainerStarted","Data":"8c3dc2db2820906db6cead51951dad216e07a2891f97a86c9e13b41ebeef994b"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.385244 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9365a64c-1314-4df5-b7b2-ed56c6d7a358","Type":"ContainerStarted","Data":"564566b464925260ee8add77065497fccd75580a9d2353ac1b4b9255eaaaa14f"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.385286 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9365a64c-1314-4df5-b7b2-ed56c6d7a358","Type":"ContainerStarted","Data":"8016a179640fc818a7d3bbf5d1ce47e46458cdd5f15d6e71fbb5fdfb45b659d9"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.385297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9365a64c-1314-4df5-b7b2-ed56c6d7a358","Type":"ContainerStarted","Data":"04b5223794320b41b75a432b99a0553c7dd39a4a4863eff6b682c924a7d9f010"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.386995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"05bf40b5-2154-40d2-8714-7e7d24d42786","Type":"ContainerStarted","Data":"d952888cd5b9560081371030c17625407b6ccc023b8c626a35916375ae86468f"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.387029 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05bf40b5-2154-40d2-8714-7e7d24d42786","Type":"ContainerStarted","Data":"da51a6d265ba45cc670f68ce99df3be2e563b783c93d95a5b103be6f3a6e72a4"} Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.408463 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.408441831 podStartE2EDuration="2.408441831s" podCreationTimestamp="2026-02-23 17:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:30.402302502 +0000 UTC m=+1306.218502102" watchObservedRunningTime="2026-02-23 17:52:30.408441831 +0000 UTC m=+1306.224641431" Feb 23 17:52:30 crc kubenswrapper[4724]: I0223 17:52:30.423926 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.423906696 podStartE2EDuration="2.423906696s" podCreationTimestamp="2026-02-23 17:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:30.416924997 +0000 UTC m=+1306.233124597" watchObservedRunningTime="2026-02-23 17:52:30.423906696 +0000 UTC m=+1306.240106296" Feb 23 17:52:31 crc kubenswrapper[4724]: I0223 17:52:31.401168 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4bef9e90-cdd6-4eb6-8801-3f7b07bc9363","Type":"ContainerStarted","Data":"28e7891a5c357a9ff7af2e5211ece38fd5bffc327b20e3a5b5e1e612c83b2c70"} Feb 23 17:52:31 crc kubenswrapper[4724]: I0223 17:52:31.427302 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.427284135 podStartE2EDuration="3.427284135s" podCreationTimestamp="2026-02-23 17:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:52:31.421414223 +0000 UTC m=+1307.237613823" watchObservedRunningTime="2026-02-23 17:52:31.427284135 +0000 UTC m=+1307.243483735" Feb 23 17:52:33 crc kubenswrapper[4724]: I0223 17:52:33.780037 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:52:33 crc kubenswrapper[4724]: I0223 17:52:33.780696 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 17:52:33 crc kubenswrapper[4724]: I0223 17:52:33.792486 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 17:52:38 crc kubenswrapper[4724]: I0223 17:52:38.779334 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 17:52:38 crc kubenswrapper[4724]: I0223 17:52:38.779843 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 17:52:38 crc kubenswrapper[4724]: I0223 17:52:38.792417 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 17:52:38 crc kubenswrapper[4724]: I0223 17:52:38.818562 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Feb 23 17:52:39 crc kubenswrapper[4724]: I0223 17:52:39.198735 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:52:39 crc kubenswrapper[4724]: I0223 17:52:39.199070 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 17:52:39 crc kubenswrapper[4724]: I0223 17:52:39.523976 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 17:52:39 crc kubenswrapper[4724]: I0223 17:52:39.793674 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9365a64c-1314-4df5-b7b2-ed56c6d7a358" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:39 crc kubenswrapper[4724]: I0223 17:52:39.793698 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9365a64c-1314-4df5-b7b2-ed56c6d7a358" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:40 crc kubenswrapper[4724]: I0223 17:52:40.213600 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4bef9e90-cdd6-4eb6-8801-3f7b07bc9363" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:40 crc kubenswrapper[4724]: I0223 17:52:40.213712 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4bef9e90-cdd6-4eb6-8801-3f7b07bc9363" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 17:52:45 crc kubenswrapper[4724]: I0223 17:52:45.581276 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 17:52:48 crc kubenswrapper[4724]: I0223 17:52:48.783709 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 17:52:48 crc kubenswrapper[4724]: I0223 17:52:48.784692 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 17:52:48 crc kubenswrapper[4724]: I0223 17:52:48.788767 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.206667 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.207422 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.208219 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.214795 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.619553 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 17:52:49 crc kubenswrapper[4724]: 
Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.625069 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 23 17:52:49 crc kubenswrapper[4724]: I0223 17:52:49.632813 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.528379 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.751846 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.752197 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.752257 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r"
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.753263 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 17:52:57 crc kubenswrapper[4724]: I0223 17:52:57.753341 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840" gracePeriod=600
Feb 23 17:52:58 crc kubenswrapper[4724]: I0223 17:52:58.432415 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 17:52:58 crc kubenswrapper[4724]: I0223 17:52:58.712931 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840" exitCode=0
Feb 23 17:52:58 crc kubenswrapper[4724]: I0223 17:52:58.713007 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840"}
Feb 23 17:52:58 crc kubenswrapper[4724]: I0223 17:52:58.714091 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b"}
Feb 23 17:52:58 crc kubenswrapper[4724]: I0223 17:52:58.714147 4724 scope.go:117] "RemoveContainer" containerID="9dc23005496a1839d115f25e420d8012af50267d7439025ce701b41626936c3c"
Feb 23 17:53:00 crc kubenswrapper[4724]: I0223 17:53:00.746637 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" containerID="cri-o://ded1c50a90f38c33e0870874825e13a050c3dd69b53c46162f08b6fbf6d19bce" gracePeriod=604797
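
Two distinct kill paths sit side by side here. machine-config-daemon is killed with gracePeriod=600 because its liveness probe failed (connection refused on 127.0.0.1:8798), so the kubelet restarts the container in place and then prunes the previous instance's record; the rabbitmq servers are killed with gracePeriod=604797 because their pods were deleted through the API, the odd figure evidently being a seven-day terminationGracePeriodSeconds (604800) minus the few seconds elapsed between the DELETE and the kill. A sketch listing every kill with its grace period, under the same assumed kubelet.log save of this journal:

    import re

    # Separate probe-driven restarts (short grace periods) from API-driven
    # deletions that inherit a pod's long terminationGracePeriodSeconds.
    KILL = re.compile(r'"Killing container with a grace period"'
                      r' pod="(?P<pod>[^"]+)".*?containerName="(?P<name>[^"]+)"'
                      r'.*?gracePeriod=(?P<grace>\d+)')

    with open("kubelet.log") as fh:  # assumed save of this journal
        for line in fh:
            for m in KILL.finditer(line):
                print(f"{m['pod']}/{m['name']}: gracePeriod={m['grace']}s")
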
Feb 23 17:53:01 crc kubenswrapper[4724]: I0223 17:53:01.576855 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" containerID="cri-o://f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529" gracePeriod=604797
Feb 23 17:53:02 crc kubenswrapper[4724]: I0223 17:53:02.824071 4724 generic.go:334] "Generic (PLEG): container finished" podID="dd0498b8-b963-4905-a986-13400917ef41" containerID="ded1c50a90f38c33e0870874825e13a050c3dd69b53c46162f08b6fbf6d19bce" exitCode=0
Feb 23 17:53:02 crc kubenswrapper[4724]: I0223 17:53:02.824115 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerDied","Data":"ded1c50a90f38c33e0870874825e13a050c3dd69b53c46162f08b6fbf6d19bce"}
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.242139 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365079 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365107 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365160 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365218 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") "
Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365235 4724
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365291 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mtqg\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365318 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365380 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.365475 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info\") pod \"dd0498b8-b963-4905-a986-13400917ef41\" (UID: \"dd0498b8-b963-4905-a986-13400917ef41\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.373262 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.375871 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.376130 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.388530 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg" (OuterVolumeSpecName: "kube-api-access-5mtqg") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "kube-api-access-5mtqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.392584 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info" (OuterVolumeSpecName: "pod-info") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.398176 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.399514 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.411353 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.435953 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data" (OuterVolumeSpecName: "config-data") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.468291 4724 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.468329 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.468357 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469760 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469792 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469803 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mtqg\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-kube-api-access-5mtqg\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469815 4724 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dd0498b8-b963-4905-a986-13400917ef41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469826 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.469837 4724 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dd0498b8-b963-4905-a986-13400917ef41-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.480274 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.489444 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.492855 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf" (OuterVolumeSpecName: "server-conf") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.549349 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dd0498b8-b963-4905-a986-13400917ef41" (UID: "dd0498b8-b963-4905-a986-13400917ef41"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.570904 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.570955 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571066 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571112 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571145 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571180 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571226 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571272 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571353 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571484 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq2xb\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb\") pod \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\" (UID: \"101a4642-f4c0-4f81-9d5a-7b8d95110eb2\") " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571949 4724 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dd0498b8-b963-4905-a986-13400917ef41-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571966 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.571980 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dd0498b8-b963-4905-a986-13400917ef41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.575060 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.575223 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.575244 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.577447 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.587586 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb" (OuterVolumeSpecName: "kube-api-access-wq2xb") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "kube-api-access-wq2xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.592017 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info" (OuterVolumeSpecName: "pod-info") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.607135 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.614433 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677091 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677139 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677156 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677170 4724 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677181 4724 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677191 4724 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677202 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq2xb\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-kube-api-access-wq2xb\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.677214 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.697099 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data" (OuterVolumeSpecName: "config-data") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.699136 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf" (OuterVolumeSpecName: "server-conf") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.710120 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.788989 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.789021 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.789031 4724 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.859041 4724 generic.go:334] "Generic (PLEG): container finished" podID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerID="f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529" exitCode=0 Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.859103 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.859089 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerDied","Data":"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529"} Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.859164 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"101a4642-f4c0-4f81-9d5a-7b8d95110eb2","Type":"ContainerDied","Data":"c906f60d0417ea8d391bd8861d6707719f1bdcfe9a80c923c1852403bf706889"} Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.859182 4724 scope.go:117] "RemoveContainer" containerID="f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.861980 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dd0498b8-b963-4905-a986-13400917ef41","Type":"ContainerDied","Data":"ed255c1c0ab48d58025725d9eadfae031b53d909557b119d63c1fc643f97dab3"} Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.862079 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.891363 4724 scope.go:117] "RemoveContainer" containerID="36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.905508 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "101a4642-f4c0-4f81-9d5a-7b8d95110eb2" (UID: "101a4642-f4c0-4f81-9d5a-7b8d95110eb2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.911508 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.928101 4724 scope.go:117] "RemoveContainer" containerID="f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529" Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.928641 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529\": container with ID starting with f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529 not found: ID does not exist" containerID="f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.928703 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529"} err="failed to get container status \"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529\": rpc error: code = NotFound desc = could not find container \"f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529\": container with ID starting with f64fb5384a6d596397c2db6a237dec438aa71902a5b4c232c2f94f2bbab4a529 not found: ID does not exist" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.928740 4724 scope.go:117] "RemoveContainer" containerID="36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5" Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.929022 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5\": container with ID starting with 36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5 not found: ID does not exist" containerID="36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.929058 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5"} err="failed to get container status \"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5\": rpc error: code = NotFound desc = could not find container \"36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5\": container with ID starting with 36f992eb2a80fa7d9c5dc03c57cc4e0fea68ee7732caaaca5b79a90820bb87b5 not found: ID does not exist" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.929070 4724 scope.go:117] "RemoveContainer" containerID="ded1c50a90f38c33e0870874825e13a050c3dd69b53c46162f08b6fbf6d19bce" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.929423 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.941776 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.942274 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="setup-container" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942301 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" 
containerName="setup-container" Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.942318 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942326 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.942376 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942405 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: E0223 17:53:03.942417 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="setup-container" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942426 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="setup-container" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942721 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0498b8-b963-4905-a986-13400917ef41" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.942767 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" containerName="rabbitmq" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.944265 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.947727 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.947890 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.948297 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.948551 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.948677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.948757 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.948912 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gpzmg" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.955489 4724 scope.go:117] "RemoveContainer" containerID="06bd6ecb286b49b9c2e55b06a2075b277273fffc283ff6e9c4e46883dc206c68" Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.965562 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:53:03 crc kubenswrapper[4724]: I0223 17:53:03.992810 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/101a4642-f4c0-4f81-9d5a-7b8d95110eb2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:04 
crc kubenswrapper[4724]: I0223 17:53:04.094593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.094677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.094702 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.094738 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095016 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4dc\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-kube-api-access-gv4dc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1593736a-2034-4811-90f9-90645b954b2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095627 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1593736a-2034-4811-90f9-90645b954b2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095693 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.095795 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.197590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.197934 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.197979 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198035 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4dc\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-kube-api-access-gv4dc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1593736a-2034-4811-90f9-90645b954b2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198134 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1593736a-2034-4811-90f9-90645b954b2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" 
(UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198160 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198187 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.198226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.199073 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.200043 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.200075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.200335 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.200626 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.202016 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1593736a-2034-4811-90f9-90645b954b2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.207282 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.208333 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.209260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1593736a-2034-4811-90f9-90645b954b2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.214869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1593736a-2034-4811-90f9-90645b954b2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.223837 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4dc\" (UniqueName: \"kubernetes.io/projected/1593736a-2034-4811-90f9-90645b954b2c-kube-api-access-gv4dc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.267608 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1593736a-2034-4811-90f9-90645b954b2c\") " pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.279993 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.433737 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.446330 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.456177 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.458892 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.464843 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.466346 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nbsqm" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.466529 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.466691 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.467321 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.467514 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.467673 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.486907 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.615797 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.615899 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.615931 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616043 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616066 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616082 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616101 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9723ff3a-6da5-46fd-be2a-89693223d4f0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616121 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9723ff3a-6da5-46fd-be2a-89693223d4f0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616136 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.616175 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbvc\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-kube-api-access-dxbvc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.718866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719044 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719132 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719174 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719200 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719235 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719255 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9723ff3a-6da5-46fd-be2a-89693223d4f0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719280 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9723ff3a-6da5-46fd-be2a-89693223d4f0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719340 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxbvc\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-kube-api-access-dxbvc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719726 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.719929 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.720438 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.720551 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.720567 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.720814 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9723ff3a-6da5-46fd-be2a-89693223d4f0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.723479 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.723804 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9723ff3a-6da5-46fd-be2a-89693223d4f0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.724183 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9723ff3a-6da5-46fd-be2a-89693223d4f0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.741541 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.753179 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxbvc\" (UniqueName: \"kubernetes.io/projected/9723ff3a-6da5-46fd-be2a-89693223d4f0-kube-api-access-dxbvc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 
17:53:04.759723 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9723ff3a-6da5-46fd-be2a-89693223d4f0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.803131 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.833367 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.888996 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1593736a-2034-4811-90f9-90645b954b2c","Type":"ContainerStarted","Data":"4434a0cc7551ddaf75b5c81ea669d53d066dccd12724a7f7cba37c0d04bf5d19"} Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.968488 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101a4642-f4c0-4f81-9d5a-7b8d95110eb2" path="/var/lib/kubelet/pods/101a4642-f4c0-4f81-9d5a-7b8d95110eb2/volumes" Feb 23 17:53:04 crc kubenswrapper[4724]: I0223 17:53:04.971122 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0498b8-b963-4905-a986-13400917ef41" path="/var/lib/kubelet/pods/dd0498b8-b963-4905-a986-13400917ef41/volumes" Feb 23 17:53:05 crc kubenswrapper[4724]: I0223 17:53:05.345177 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 17:53:05 crc kubenswrapper[4724]: I0223 17:53:05.898721 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9723ff3a-6da5-46fd-be2a-89693223d4f0","Type":"ContainerStarted","Data":"1d1cbbb8d9ea61e9fedeea01cbd44dfc4680f79289c1a8dacfa6ddda95416ee0"} Feb 23 17:53:06 crc kubenswrapper[4724]: I0223 17:53:06.911459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1593736a-2034-4811-90f9-90645b954b2c","Type":"ContainerStarted","Data":"68c7e0bc9895e0f966ccb14f3290ad715578875177c490d2d011c13dbdeac189"} Feb 23 17:53:06 crc kubenswrapper[4724]: I0223 17:53:06.912923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9723ff3a-6da5-46fd-be2a-89693223d4f0","Type":"ContainerStarted","Data":"65e0f5cbec3e20d70799018301287769f77f01e1aaf5bc01079aeef71e6d0af5"} Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.047940 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.050574 4724 util.go:30] "No sandbox for pod can be found. 
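
Note what just happened to rabbitmq-server-0: the pod was deleted and immediately re-created under the same name but a new UID (101a4642… replaced by 1593736a…), the usual replacement pattern for a StatefulSet-style workload. That is why the cpu_manager and memory_manager first dropped their stale per-UID assignments ("RemoveStaleState: removing container", "Deleted CPUSet assignment") and why housekeeping then removed the old UIDs' directories ("Cleaned up orphaned pod volumes dir"). A rough sketch of that orphan scan under /var/lib/kubelet/pods — the active-set source and the cleanup policy are assumptions; the kubelet only reclaims a directory once nothing is mounted beneath it:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findOrphanedPodDirs lists per-pod volume directories whose UID is no
// longer in the active set; these are candidates for cleanup once all
// volumes beneath them have been unmounted.
func findOrphanedPodDirs(podsRoot string, active map[string]bool) ([]string, error) {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return nil, err
	}
	var orphans []string
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			orphans = append(orphans, filepath.Join(podsRoot, e.Name(), "volumes"))
		}
	}
	return orphans, nil
}

func main() {
	active := map[string]bool{"1593736a-2034-4811-90f9-90645b954b2c": true}
	orphans, err := findOrphanedPodDirs("/var/lib/kubelet/pods", active)
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for _, dir := range orphans {
		fmt.Println("orphaned pod volumes dir:", dir)
	}
}
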
Need to start a new one" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.052321 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.062358 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.110869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.110957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.111022 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52zq\" (UniqueName: \"kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.111073 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.111113 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.111165 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.111202 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.212513 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: 
\"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.212593 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.212648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.212668 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.212872 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213667 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213782 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213856 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213907 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52zq\" (UniqueName: \"kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213945 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " 
pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.213988 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.214060 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.214496 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.242027 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52zq\" (UniqueName: \"kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq\") pod \"dnsmasq-dns-678497f889-p66x2\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.370515 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.858516 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:13 crc kubenswrapper[4724]: I0223 17:53:13.994094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678497f889-p66x2" event={"ID":"54f1ef63-902d-443b-80d3-906c224707f3","Type":"ContainerStarted","Data":"d38fea50a4b90307ae5a22369a0f92e85c1a18a156d15168048b9b30024b4c50"} Feb 23 17:53:15 crc kubenswrapper[4724]: I0223 17:53:15.003848 4724 generic.go:334] "Generic (PLEG): container finished" podID="54f1ef63-902d-443b-80d3-906c224707f3" containerID="546d63522d197a2401b404150ba5c8281ac8a7c10c3281869f7de2e04e093c93" exitCode=0 Feb 23 17:53:15 crc kubenswrapper[4724]: I0223 17:53:15.003953 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678497f889-p66x2" event={"ID":"54f1ef63-902d-443b-80d3-906c224707f3","Type":"ContainerDied","Data":"546d63522d197a2401b404150ba5c8281ac8a7c10c3281869f7de2e04e093c93"} Feb 23 17:53:16 crc kubenswrapper[4724]: I0223 17:53:16.014746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678497f889-p66x2" event={"ID":"54f1ef63-902d-443b-80d3-906c224707f3","Type":"ContainerStarted","Data":"68de5db5ef2f69ab467a48af6de0dfc8c9ae6c690548d85d5ba3948e002a2926"} Feb 23 17:53:16 crc kubenswrapper[4724]: I0223 17:53:16.015726 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:16 crc kubenswrapper[4724]: I0223 17:53:16.038144 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-678497f889-p66x2" podStartSLOduration=3.038128977 podStartE2EDuration="3.038128977s" 
podCreationTimestamp="2026-02-23 17:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:53:16.030756068 +0000 UTC m=+1351.846955668" watchObservedRunningTime="2026-02-23 17:53:16.038128977 +0000 UTC m=+1351.854328577" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.372574 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.466350 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"] Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.466646 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="dnsmasq-dns" containerID="cri-o://08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013" gracePeriod=10 Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.575189 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69644d8897-p4mmz"] Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.576951 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.608035 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.225:5353: connect: connection refused" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.626976 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69644d8897-p4mmz"] Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-svc\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640638 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-openstack-edpm-ipam\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640660 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-sb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640733 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrf7\" (UniqueName: \"kubernetes.io/projected/f47e5d73-be56-42e3-b23e-1710cfab9733-kube-api-access-fvrf7\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 
17:53:23.640774 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-nb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-config\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.640828 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-swift-storage-0\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.742896 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-nb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743261 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-config\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-swift-storage-0\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743359 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-svc\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743426 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-openstack-edpm-ipam\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743454 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-sb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 
17:53:23.743556 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrf7\" (UniqueName: \"kubernetes.io/projected/f47e5d73-be56-42e3-b23e-1710cfab9733-kube-api-access-fvrf7\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.743908 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-nb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.744270 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-config\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.744737 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-swift-storage-0\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.744889 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-dns-svc\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.745112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-ovsdbserver-sb\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.745233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47e5d73-be56-42e3-b23e-1710cfab9733-openstack-edpm-ipam\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.763530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrf7\" (UniqueName: \"kubernetes.io/projected/f47e5d73-be56-42e3-b23e-1710cfab9733-kube-api-access-fvrf7\") pod \"dnsmasq-dns-69644d8897-p4mmz\" (UID: \"f47e5d73-be56-42e3-b23e-1710cfab9733\") " pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:23 crc kubenswrapper[4724]: I0223 17:53:23.962020 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.063330 4724 util.go:48] "No ready sandbox for pod can be found. 
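
Above, the old dnsmasq-dns-5678c8f4f-9w6qj pod is killed with a 10-second grace period, and its readiness probe immediately starts failing with "dial tcp 10.217.0.225:5353: connect: connection refused" — exactly what a tcpSocket readiness check yields once the process stops listening. A minimal equivalent of such a probe; the address is the one from the log, and the one-second timeout is an assumption:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP mirrors a tcpSocket readiness check: a successful dial
// means ready; a refused or timed-out dial is a probe failure.
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "connect: connection refused" while dnsmasq is down
	}
	return conn.Close()
}

func main() {
	if err := probeTCP("10.217.0.225:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("ready")
}

A successful dial-and-close is all "ready" means for a tcpSocket probe; no payload is exchanged.
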
Need to start a new one" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.093669 4724 generic.go:334] "Generic (PLEG): container finished" podID="37950574-5957-4f62-8d9e-0decba9e87e0" containerID="08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013" exitCode=0 Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.093987 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" event={"ID":"37950574-5957-4f62-8d9e-0decba9e87e0","Type":"ContainerDied","Data":"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013"} Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.094013 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" event={"ID":"37950574-5957-4f62-8d9e-0decba9e87e0","Type":"ContainerDied","Data":"3a3ff76e8df08a3bcece1c4ec7f3f6b30196e7872b9bd1582fa6717bf353b3aa"} Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.094033 4724 scope.go:117] "RemoveContainer" containerID="08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.094162 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5678c8f4f-9w6qj" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.130887 4724 scope.go:117] "RemoveContainer" containerID="cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjdk6\" (UniqueName: \"kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6\") pod \"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc\") pod \"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155186 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config\") pod \"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155226 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb\") pod \"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155276 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0\") pod \"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.155327 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb\") pod 
\"37950574-5957-4f62-8d9e-0decba9e87e0\" (UID: \"37950574-5957-4f62-8d9e-0decba9e87e0\") " Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.158072 4724 scope.go:117] "RemoveContainer" containerID="08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013" Feb 23 17:53:24 crc kubenswrapper[4724]: E0223 17:53:24.158646 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013\": container with ID starting with 08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013 not found: ID does not exist" containerID="08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.158680 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013"} err="failed to get container status \"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013\": rpc error: code = NotFound desc = could not find container \"08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013\": container with ID starting with 08a832f9716fe9855ad6d5d3c3385eea03be36a9d532df9cefeb4ace3db98013 not found: ID does not exist" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.158704 4724 scope.go:117] "RemoveContainer" containerID="cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff" Feb 23 17:53:24 crc kubenswrapper[4724]: E0223 17:53:24.160125 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff\": container with ID starting with cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff not found: ID does not exist" containerID="cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.160162 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff"} err="failed to get container status \"cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff\": rpc error: code = NotFound desc = could not find container \"cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff\": container with ID starting with cc5a3bdfa9c194b82e07c356e1743ade1d8e175045bb13984af6360afe1542ff not found: ID does not exist" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.169072 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6" (OuterVolumeSpecName: "kube-api-access-qjdk6") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "kube-api-access-qjdk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.209820 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config" (OuterVolumeSpecName: "config") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.221123 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.235981 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.237611 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.241090 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37950574-5957-4f62-8d9e-0decba9e87e0" (UID: "37950574-5957-4f62-8d9e-0decba9e87e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258039 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjdk6\" (UniqueName: \"kubernetes.io/projected/37950574-5957-4f62-8d9e-0decba9e87e0-kube-api-access-qjdk6\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258080 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258091 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258100 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258111 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.258120 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37950574-5957-4f62-8d9e-0decba9e87e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.433971 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"] Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.449318 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5678c8f4f-9w6qj"] Feb 23 17:53:24 crc kubenswrapper[4724]: W0223 17:53:24.456833 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf47e5d73_be56_42e3_b23e_1710cfab9733.slice/crio-2fa2555dfcb8af41561f76ec7bd2136fc3e92a12289587ea2478c0fd8b7d9ee4 WatchSource:0}: Error finding container 2fa2555dfcb8af41561f76ec7bd2136fc3e92a12289587ea2478c0fd8b7d9ee4: Status 404 returned error can't find the container with id 2fa2555dfcb8af41561f76ec7bd2136fc3e92a12289587ea2478c0fd8b7d9ee4 Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.463013 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69644d8897-p4mmz"] Feb 23 17:53:24 crc kubenswrapper[4724]: I0223 17:53:24.962745 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" path="/var/lib/kubelet/pods/37950574-5957-4f62-8d9e-0decba9e87e0/volumes" Feb 23 17:53:25 crc kubenswrapper[4724]: I0223 17:53:25.106172 4724 generic.go:334] "Generic (PLEG): container finished" podID="f47e5d73-be56-42e3-b23e-1710cfab9733" containerID="7444248c9d08724224f52c1c3eb1b193a20e4ddac6acda94c1a569265daa3458" exitCode=0 Feb 23 17:53:25 crc kubenswrapper[4724]: I0223 17:53:25.106239 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" event={"ID":"f47e5d73-be56-42e3-b23e-1710cfab9733","Type":"ContainerDied","Data":"7444248c9d08724224f52c1c3eb1b193a20e4ddac6acda94c1a569265daa3458"} Feb 23 17:53:25 crc kubenswrapper[4724]: I0223 17:53:25.106765 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" event={"ID":"f47e5d73-be56-42e3-b23e-1710cfab9733","Type":"ContainerStarted","Data":"2fa2555dfcb8af41561f76ec7bd2136fc3e92a12289587ea2478c0fd8b7d9ee4"} Feb 23 17:53:26 crc kubenswrapper[4724]: I0223 17:53:26.119706 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" event={"ID":"f47e5d73-be56-42e3-b23e-1710cfab9733","Type":"ContainerStarted","Data":"979303b7cd7118a72477426b9c63e72a8765818c1a448ef5b3d262976379750e"} Feb 23 17:53:26 crc kubenswrapper[4724]: I0223 17:53:26.119891 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:26 crc kubenswrapper[4724]: I0223 17:53:26.148568 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" podStartSLOduration=3.148551608 podStartE2EDuration="3.148551608s" podCreationTimestamp="2026-02-23 17:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:53:26.143911336 +0000 UTC m=+1361.960110946" watchObservedRunningTime="2026-02-23 17:53:26.148551608 +0000 UTC m=+1361.964751218" Feb 23 17:53:33 crc kubenswrapper[4724]: I0223 17:53:33.964581 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69644d8897-p4mmz" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.053843 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.054107 4724 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-678497f889-p66x2" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="dnsmasq-dns" containerID="cri-o://68de5db5ef2f69ab467a48af6de0dfc8c9ae6c690548d85d5ba3948e002a2926" gracePeriod=10 Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.205623 4724 generic.go:334] "Generic (PLEG): container finished" podID="54f1ef63-902d-443b-80d3-906c224707f3" containerID="68de5db5ef2f69ab467a48af6de0dfc8c9ae6c690548d85d5ba3948e002a2926" exitCode=0 Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.205664 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678497f889-p66x2" event={"ID":"54f1ef63-902d-443b-80d3-906c224707f3","Type":"ContainerDied","Data":"68de5db5ef2f69ab467a48af6de0dfc8c9ae6c690548d85d5ba3948e002a2926"} Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.535527 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673544 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673609 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673656 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673729 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673755 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673829 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.673844 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b52zq\" (UniqueName: \"kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq\") pod \"54f1ef63-902d-443b-80d3-906c224707f3\" (UID: \"54f1ef63-902d-443b-80d3-906c224707f3\") " Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 
17:53:34.689621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq" (OuterVolumeSpecName: "kube-api-access-b52zq") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "kube-api-access-b52zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.722813 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config" (OuterVolumeSpecName: "config") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.723706 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.728456 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.729145 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.729300 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.731641 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54f1ef63-902d-443b-80d3-906c224707f3" (UID: "54f1ef63-902d-443b-80d3-906c224707f3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776070 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-config\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776103 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776112 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776123 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776131 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776139 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f1ef63-902d-443b-80d3-906c224707f3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:34 crc kubenswrapper[4724]: I0223 17:53:34.776146 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b52zq\" (UniqueName: \"kubernetes.io/projected/54f1ef63-902d-443b-80d3-906c224707f3-kube-api-access-b52zq\") on node \"crc\" DevicePath \"\"" Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.218570 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678497f889-p66x2" event={"ID":"54f1ef63-902d-443b-80d3-906c224707f3","Type":"ContainerDied","Data":"d38fea50a4b90307ae5a22369a0f92e85c1a18a156d15168048b9b30024b4c50"} Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.218630 4724 scope.go:117] "RemoveContainer" containerID="68de5db5ef2f69ab467a48af6de0dfc8c9ae6c690548d85d5ba3948e002a2926" Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.218684 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678497f889-p66x2" Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.245994 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.260823 4724 scope.go:117] "RemoveContainer" containerID="546d63522d197a2401b404150ba5c8281ac8a7c10c3281869f7de2e04e093c93" Feb 23 17:53:35 crc kubenswrapper[4724]: I0223 17:53:35.263337 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-678497f889-p66x2"] Feb 23 17:53:36 crc kubenswrapper[4724]: I0223 17:53:36.962309 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f1ef63-902d-443b-80d3-906c224707f3" path="/var/lib/kubelet/pods/54f1ef63-902d-443b-80d3-906c224707f3/volumes" Feb 23 17:53:39 crc kubenswrapper[4724]: I0223 17:53:39.262847 4724 generic.go:334] "Generic (PLEG): container finished" podID="9723ff3a-6da5-46fd-be2a-89693223d4f0" containerID="65e0f5cbec3e20d70799018301287769f77f01e1aaf5bc01079aeef71e6d0af5" exitCode=0 Feb 23 17:53:39 crc kubenswrapper[4724]: I0223 17:53:39.262924 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9723ff3a-6da5-46fd-be2a-89693223d4f0","Type":"ContainerDied","Data":"65e0f5cbec3e20d70799018301287769f77f01e1aaf5bc01079aeef71e6d0af5"} Feb 23 17:53:39 crc kubenswrapper[4724]: I0223 17:53:39.265802 4724 generic.go:334] "Generic (PLEG): container finished" podID="1593736a-2034-4811-90f9-90645b954b2c" containerID="68c7e0bc9895e0f966ccb14f3290ad715578875177c490d2d011c13dbdeac189" exitCode=0 Feb 23 17:53:39 crc kubenswrapper[4724]: I0223 17:53:39.265864 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1593736a-2034-4811-90f9-90645b954b2c","Type":"ContainerDied","Data":"68c7e0bc9895e0f966ccb14f3290ad715578875177c490d2d011c13dbdeac189"} Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.282095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9723ff3a-6da5-46fd-be2a-89693223d4f0","Type":"ContainerStarted","Data":"00849fe0a20a1d39cde7ad754bf4e7fd0572fcb2254dec9c2c2e8961d07f13ad"} Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.283479 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.284459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1593736a-2034-4811-90f9-90645b954b2c","Type":"ContainerStarted","Data":"b72b8471a0608001a22c981a26f70f5d2e8f7b5f3d87bd6453bdc0373719b8da"} Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.284677 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.306678 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.306658512 podStartE2EDuration="36.306658512s" podCreationTimestamp="2026-02-23 17:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:53:40.304476418 +0000 UTC m=+1376.120676038" watchObservedRunningTime="2026-02-23 17:53:40.306658512 +0000 UTC m=+1376.122858112" Feb 23 17:53:40 crc kubenswrapper[4724]: I0223 17:53:40.340835 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.340813558 podStartE2EDuration="37.340813558s" podCreationTimestamp="2026-02-23 17:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 17:53:40.326039922 +0000 UTC m=+1376.142239522" watchObservedRunningTime="2026-02-23 17:53:40.340813558 +0000 UTC m=+1376.157013148" Feb 23 17:53:46 crc kubenswrapper[4724]: I0223 17:53:46.667107 4724 scope.go:117] "RemoveContainer" containerID="49f0c72fb4911ec8aa2fc7339f8af96f7a36570471b463b8b8c3bf494fe72670" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.069167 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr"] Feb 23 17:53:52 crc kubenswrapper[4724]: E0223 17:53:52.070025 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070037 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: E0223 17:53:52.070062 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070067 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: E0223 17:53:52.070086 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="init" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070092 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="init" Feb 23 17:53:52 crc kubenswrapper[4724]: E0223 17:53:52.070100 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="init" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070105 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="init" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070288 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="37950574-5957-4f62-8d9e-0decba9e87e0" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.070308 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f1ef63-902d-443b-80d3-906c224707f3" containerName="dnsmasq-dns" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.071065 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.072758 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.072823 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.073258 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.076381 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.088033 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr"] Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.133348 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.133447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.133854 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.134008 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n8q2\" (UniqueName: \"kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.237686 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n8q2\" (UniqueName: \"kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.238440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.238772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.239234 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.246575 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.247824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.252925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.253225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n8q2\" (UniqueName: \"kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.388528 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:53:52 crc kubenswrapper[4724]: I0223 17:53:52.986999 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr"] Feb 23 17:53:53 crc kubenswrapper[4724]: I0223 17:53:53.418912 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" event={"ID":"8780dd09-5b4b-40f6-81ee-d2163bd3f066","Type":"ContainerStarted","Data":"ec354513deca35b81857c8b443ce2744401c16340f90f97fb43156e43e603468"} Feb 23 17:53:54 crc kubenswrapper[4724]: I0223 17:53:54.285209 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 17:53:54 crc kubenswrapper[4724]: I0223 17:53:54.835565 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 17:54:03 crc kubenswrapper[4724]: I0223 17:54:03.544036 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" event={"ID":"8780dd09-5b4b-40f6-81ee-d2163bd3f066","Type":"ContainerStarted","Data":"b4f13949eecfc0b66aaf4177f73c768380070f3afac86c307fef792623d7d0d2"} Feb 23 17:54:03 crc kubenswrapper[4724]: I0223 17:54:03.567623 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" podStartSLOduration=1.51924363 podStartE2EDuration="11.567601511s" podCreationTimestamp="2026-02-23 17:53:52 +0000 UTC" firstStartedPulling="2026-02-23 17:53:52.994106344 +0000 UTC m=+1388.810305934" lastFinishedPulling="2026-02-23 17:54:03.042464215 +0000 UTC m=+1398.858663815" observedRunningTime="2026-02-23 17:54:03.559321696 +0000 UTC m=+1399.375521286" watchObservedRunningTime="2026-02-23 17:54:03.567601511 +0000 UTC m=+1399.383801111" Feb 23 17:54:14 crc kubenswrapper[4724]: I0223 17:54:14.657636 4724 generic.go:334] "Generic (PLEG): container finished" podID="8780dd09-5b4b-40f6-81ee-d2163bd3f066" containerID="b4f13949eecfc0b66aaf4177f73c768380070f3afac86c307fef792623d7d0d2" exitCode=0 Feb 23 17:54:14 crc kubenswrapper[4724]: I0223 17:54:14.657822 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" event={"ID":"8780dd09-5b4b-40f6-81ee-d2163bd3f066","Type":"ContainerDied","Data":"b4f13949eecfc0b66aaf4177f73c768380070f3afac86c307fef792623d7d0d2"} Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.079602 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.127478 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam\") pod \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.127583 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n8q2\" (UniqueName: \"kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2\") pod \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.127654 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory\") pod \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.127718 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle\") pod \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\" (UID: \"8780dd09-5b4b-40f6-81ee-d2163bd3f066\") " Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.134497 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8780dd09-5b4b-40f6-81ee-d2163bd3f066" (UID: "8780dd09-5b4b-40f6-81ee-d2163bd3f066"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.139788 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2" (OuterVolumeSpecName: "kube-api-access-6n8q2") pod "8780dd09-5b4b-40f6-81ee-d2163bd3f066" (UID: "8780dd09-5b4b-40f6-81ee-d2163bd3f066"). InnerVolumeSpecName "kube-api-access-6n8q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.158631 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory" (OuterVolumeSpecName: "inventory") pod "8780dd09-5b4b-40f6-81ee-d2163bd3f066" (UID: "8780dd09-5b4b-40f6-81ee-d2163bd3f066"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.172740 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8780dd09-5b4b-40f6-81ee-d2163bd3f066" (UID: "8780dd09-5b4b-40f6-81ee-d2163bd3f066"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.229654 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.229684 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n8q2\" (UniqueName: \"kubernetes.io/projected/8780dd09-5b4b-40f6-81ee-d2163bd3f066-kube-api-access-6n8q2\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.229704 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.229714 4724 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780dd09-5b4b-40f6-81ee-d2163bd3f066-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.681580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" event={"ID":"8780dd09-5b4b-40f6-81ee-d2163bd3f066","Type":"ContainerDied","Data":"ec354513deca35b81857c8b443ce2744401c16340f90f97fb43156e43e603468"} Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.681905 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec354513deca35b81857c8b443ce2744401c16340f90f97fb43156e43e603468" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.681655 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.757074 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj"] Feb 23 17:54:16 crc kubenswrapper[4724]: E0223 17:54:16.757483 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8780dd09-5b4b-40f6-81ee-d2163bd3f066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.757497 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8780dd09-5b4b-40f6-81ee-d2163bd3f066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.757664 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8780dd09-5b4b-40f6-81ee-d2163bd3f066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.758483 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.761559 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.761632 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.761919 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.764870 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.767078 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj"] Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.841563 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.841615 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.841655 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqdr\" (UniqueName: \"kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.943715 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqdr\" (UniqueName: \"kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.943923 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.943956 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.949260 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.956258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:16 crc kubenswrapper[4724]: I0223 17:54:16.961745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqdr\" (UniqueName: \"kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ctwxj\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:17 crc kubenswrapper[4724]: I0223 17:54:17.080141 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:17 crc kubenswrapper[4724]: I0223 17:54:17.615852 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj"] Feb 23 17:54:17 crc kubenswrapper[4724]: I0223 17:54:17.693919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" event={"ID":"190a2171-8cbd-4bb4-a22d-76d1cf634934","Type":"ContainerStarted","Data":"6c9bdcf1917b8c81fc6e302542a006a670998685b17e3b8108965850fb0be312"} Feb 23 17:54:18 crc kubenswrapper[4724]: I0223 17:54:18.705983 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" event={"ID":"190a2171-8cbd-4bb4-a22d-76d1cf634934","Type":"ContainerStarted","Data":"80fad3920413b9323737cac186e4600bd2192544d6d25c32ee36016de0062aa7"} Feb 23 17:54:18 crc kubenswrapper[4724]: I0223 17:54:18.732175 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" podStartSLOduration=2.297440094 podStartE2EDuration="2.732157909s" podCreationTimestamp="2026-02-23 17:54:16 +0000 UTC" firstStartedPulling="2026-02-23 17:54:17.622651259 +0000 UTC m=+1413.438850859" lastFinishedPulling="2026-02-23 17:54:18.057369073 +0000 UTC m=+1413.873568674" observedRunningTime="2026-02-23 17:54:18.728770005 +0000 UTC m=+1414.544969605" watchObservedRunningTime="2026-02-23 17:54:18.732157909 +0000 UTC m=+1414.548357509" Feb 23 17:54:20 crc kubenswrapper[4724]: I0223 17:54:20.741035 4724 generic.go:334] "Generic (PLEG): container finished" podID="190a2171-8cbd-4bb4-a22d-76d1cf634934" containerID="80fad3920413b9323737cac186e4600bd2192544d6d25c32ee36016de0062aa7" exitCode=0 Feb 23 17:54:20 crc kubenswrapper[4724]: I0223 17:54:20.741227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" event={"ID":"190a2171-8cbd-4bb4-a22d-76d1cf634934","Type":"ContainerDied","Data":"80fad3920413b9323737cac186e4600bd2192544d6d25c32ee36016de0062aa7"} Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.231083 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.353006 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam\") pod \"190a2171-8cbd-4bb4-a22d-76d1cf634934\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.353181 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory\") pod \"190a2171-8cbd-4bb4-a22d-76d1cf634934\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.353318 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqqdr\" (UniqueName: \"kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr\") pod \"190a2171-8cbd-4bb4-a22d-76d1cf634934\" (UID: \"190a2171-8cbd-4bb4-a22d-76d1cf634934\") " Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.358285 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr" (OuterVolumeSpecName: "kube-api-access-nqqdr") pod "190a2171-8cbd-4bb4-a22d-76d1cf634934" (UID: "190a2171-8cbd-4bb4-a22d-76d1cf634934"). InnerVolumeSpecName "kube-api-access-nqqdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.387270 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory" (OuterVolumeSpecName: "inventory") pod "190a2171-8cbd-4bb4-a22d-76d1cf634934" (UID: "190a2171-8cbd-4bb4-a22d-76d1cf634934"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.395728 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "190a2171-8cbd-4bb4-a22d-76d1cf634934" (UID: "190a2171-8cbd-4bb4-a22d-76d1cf634934"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.456041 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.456083 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqqdr\" (UniqueName: \"kubernetes.io/projected/190a2171-8cbd-4bb4-a22d-76d1cf634934-kube-api-access-nqqdr\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.456095 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/190a2171-8cbd-4bb4-a22d-76d1cf634934-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.773771 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" event={"ID":"190a2171-8cbd-4bb4-a22d-76d1cf634934","Type":"ContainerDied","Data":"6c9bdcf1917b8c81fc6e302542a006a670998685b17e3b8108965850fb0be312"} Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.773813 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c9bdcf1917b8c81fc6e302542a006a670998685b17e3b8108965850fb0be312" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.773867 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ctwxj" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.839556 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf"] Feb 23 17:54:22 crc kubenswrapper[4724]: E0223 17:54:22.839999 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190a2171-8cbd-4bb4-a22d-76d1cf634934" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.840043 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="190a2171-8cbd-4bb4-a22d-76d1cf634934" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.840254 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="190a2171-8cbd-4bb4-a22d-76d1cf634934" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.840947 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.843205 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.843377 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.845334 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.845573 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.865059 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf"] Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.963761 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.963873 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blt6v\" (UniqueName: \"kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.963950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:22 crc kubenswrapper[4724]: I0223 17:54:22.963989 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.066414 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.066532 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blt6v\" (UniqueName: 
\"kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.066786 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.066811 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.075986 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.083864 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.084383 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.099047 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blt6v\" (UniqueName: \"kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.159818 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:54:23 crc kubenswrapper[4724]: I0223 17:54:23.804353 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf"] Feb 23 17:54:24 crc kubenswrapper[4724]: I0223 17:54:24.794128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" event={"ID":"456d50d3-b5f9-4dd4-9eec-c15f21b183e7","Type":"ContainerStarted","Data":"dfebd486be3ef3c9335b08471b292ea3a817a9ebe28bd950228f681101d56074"} Feb 23 17:54:24 crc kubenswrapper[4724]: I0223 17:54:24.794664 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" event={"ID":"456d50d3-b5f9-4dd4-9eec-c15f21b183e7","Type":"ContainerStarted","Data":"bc4fbea159cccae757a39a4d72f6f63251a4bb04c16a00e86021cdde1d5586f9"} Feb 23 17:54:24 crc kubenswrapper[4724]: I0223 17:54:24.817336 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" podStartSLOduration=2.411625625 podStartE2EDuration="2.817316141s" podCreationTimestamp="2026-02-23 17:54:22 +0000 UTC" firstStartedPulling="2026-02-23 17:54:23.816163146 +0000 UTC m=+1419.632362746" lastFinishedPulling="2026-02-23 17:54:24.221853662 +0000 UTC m=+1420.038053262" observedRunningTime="2026-02-23 17:54:24.80718899 +0000 UTC m=+1420.623388620" watchObservedRunningTime="2026-02-23 17:54:24.817316141 +0000 UTC m=+1420.633515741" Feb 23 17:54:46 crc kubenswrapper[4724]: I0223 17:54:46.795795 4724 scope.go:117] "RemoveContainer" containerID="b4cbb44de734bb9c58a1961f32ab5a2c11cdb1c4c95a88fa7a9a77587c86edee" Feb 23 17:54:46 crc kubenswrapper[4724]: I0223 17:54:46.842028 4724 scope.go:117] "RemoveContainer" containerID="e974807255898f073fdf68444b2590a33f3d9146d2fd3b57a7e029dfa4743a35" Feb 23 17:54:46 crc kubenswrapper[4724]: I0223 17:54:46.874501 4724 scope.go:117] "RemoveContainer" containerID="4d5d27132bce363f640305e795e5f100871e5bcfa4d27f8fecb29a0bbde49b45" Feb 23 17:54:46 crc kubenswrapper[4724]: I0223 17:54:46.894996 4724 scope.go:117] "RemoveContainer" containerID="de5be2342bd4a6b35d3550920aef4a03893172b11fad0fb7fd67afea2e3564d8" Feb 23 17:54:47 crc kubenswrapper[4724]: I0223 17:54:47.001365 4724 scope.go:117] "RemoveContainer" containerID="b2da1cebcd254d7bd0efcccf81d514bcae9dae998557c8791d0fbd6420e53d83" Feb 23 17:54:47 crc kubenswrapper[4724]: I0223 17:54:47.026847 4724 scope.go:117] "RemoveContainer" containerID="283abc00c29ca6a35b5398caf6f4627399287c4f34211c62b13b6db1f3ccfba4" Feb 23 17:54:47 crc kubenswrapper[4724]: I0223 17:54:47.155443 4724 scope.go:117] "RemoveContainer" containerID="f5bf59b649fa98c30ad816168fb029be1cc7c10b7b9f0e5f43d7540ba180fb00" Feb 23 17:55:27 crc kubenswrapper[4724]: I0223 17:55:27.752118 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:55:27 crc kubenswrapper[4724]: I0223 17:55:27.752637 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.146197 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.151689 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.163274 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.262087 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4sh\" (UniqueName: \"kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.262854 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.263178 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.365476 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4sh\" (UniqueName: \"kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.365556 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.365654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.366297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.366366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.397962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4sh\" (UniqueName: \"kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh\") pod \"redhat-operators-zvjjg\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:38 crc kubenswrapper[4724]: I0223 17:55:38.531177 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:39 crc kubenswrapper[4724]: I0223 17:55:39.037518 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:55:39 crc kubenswrapper[4724]: W0223 17:55:39.039626 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fd01612_a523_4d9a_9505_43e9c85925d0.slice/crio-18b322ec5d0a538098cbc2b3de927ead9c4727b685ca58acf82ebd78c2850c24 WatchSource:0}: Error finding container 18b322ec5d0a538098cbc2b3de927ead9c4727b685ca58acf82ebd78c2850c24: Status 404 returned error can't find the container with id 18b322ec5d0a538098cbc2b3de927ead9c4727b685ca58acf82ebd78c2850c24 Feb 23 17:55:39 crc kubenswrapper[4724]: I0223 17:55:39.557520 4724 generic.go:334] "Generic (PLEG): container finished" podID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerID="7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f" exitCode=0 Feb 23 17:55:39 crc kubenswrapper[4724]: I0223 17:55:39.557625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerDied","Data":"7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f"} Feb 23 17:55:39 crc kubenswrapper[4724]: I0223 17:55:39.557851 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerStarted","Data":"18b322ec5d0a538098cbc2b3de927ead9c4727b685ca58acf82ebd78c2850c24"} Feb 23 17:55:41 crc kubenswrapper[4724]: I0223 17:55:41.577301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerStarted","Data":"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562"} Feb 23 17:55:45 crc kubenswrapper[4724]: I0223 17:55:45.619286 4724 generic.go:334] "Generic (PLEG): container finished" podID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerID="fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562" exitCode=0 Feb 23 17:55:45 crc kubenswrapper[4724]: I0223 17:55:45.619539 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerDied","Data":"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562"} Feb 23 17:55:46 crc kubenswrapper[4724]: I0223 17:55:46.632808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" 
event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerStarted","Data":"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5"} Feb 23 17:55:46 crc kubenswrapper[4724]: I0223 17:55:46.653578 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvjjg" podStartSLOduration=2.176118566 podStartE2EDuration="8.653556247s" podCreationTimestamp="2026-02-23 17:55:38 +0000 UTC" firstStartedPulling="2026-02-23 17:55:39.559156794 +0000 UTC m=+1495.375356394" lastFinishedPulling="2026-02-23 17:55:46.036594475 +0000 UTC m=+1501.852794075" observedRunningTime="2026-02-23 17:55:46.647365364 +0000 UTC m=+1502.463564964" watchObservedRunningTime="2026-02-23 17:55:46.653556247 +0000 UTC m=+1502.469755847" Feb 23 17:55:47 crc kubenswrapper[4724]: I0223 17:55:47.483417 4724 scope.go:117] "RemoveContainer" containerID="5892e8d7bcde1c2d53816d81acf28f0f496ad8a2b3a54385c84447994d93d5d6" Feb 23 17:55:47 crc kubenswrapper[4724]: I0223 17:55:47.510015 4724 scope.go:117] "RemoveContainer" containerID="b7bbe500fd57f46c0775448e5d5d5c3ebaee9d0f8d97a05f5869f8e43f275452" Feb 23 17:55:47 crc kubenswrapper[4724]: I0223 17:55:47.541616 4724 scope.go:117] "RemoveContainer" containerID="d9fecb18242066d76feca02682eee3c73ddfba742dc5358eaf55e3998693314e" Feb 23 17:55:47 crc kubenswrapper[4724]: I0223 17:55:47.581277 4724 scope.go:117] "RemoveContainer" containerID="cdbb62ec359ed7fd99915fc8f1c2c8c13ec554bbd52b65d843ff2bb7d478290c" Feb 23 17:55:47 crc kubenswrapper[4724]: I0223 17:55:47.639527 4724 scope.go:117] "RemoveContainer" containerID="540cc4dad8054d7db9adbd63abe3367aaa4b9cc3c0d0f17d6296210bd132d60d" Feb 23 17:55:48 crc kubenswrapper[4724]: I0223 17:55:48.532068 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:48 crc kubenswrapper[4724]: I0223 17:55:48.532424 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:49 crc kubenswrapper[4724]: I0223 17:55:49.579636 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvjjg" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="registry-server" probeResult="failure" output=< Feb 23 17:55:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 17:55:49 crc kubenswrapper[4724]: > Feb 23 17:55:57 crc kubenswrapper[4724]: I0223 17:55:57.752298 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:55:57 crc kubenswrapper[4724]: I0223 17:55:57.752841 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:55:58 crc kubenswrapper[4724]: I0223 17:55:58.579932 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:58 crc kubenswrapper[4724]: I0223 17:55:58.637552 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:55:58 crc kubenswrapper[4724]: I0223 17:55:58.821709 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:55:59 crc kubenswrapper[4724]: I0223 17:55:59.746866 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvjjg" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="registry-server" containerID="cri-o://236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5" gracePeriod=2 Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.194371 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.294267 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4sh\" (UniqueName: \"kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh\") pod \"2fd01612-a523-4d9a-9505-43e9c85925d0\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.294364 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities\") pod \"2fd01612-a523-4d9a-9505-43e9c85925d0\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.294460 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content\") pod \"2fd01612-a523-4d9a-9505-43e9c85925d0\" (UID: \"2fd01612-a523-4d9a-9505-43e9c85925d0\") " Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.295522 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities" (OuterVolumeSpecName: "utilities") pod "2fd01612-a523-4d9a-9505-43e9c85925d0" (UID: "2fd01612-a523-4d9a-9505-43e9c85925d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.302951 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh" (OuterVolumeSpecName: "kube-api-access-6x4sh") pod "2fd01612-a523-4d9a-9505-43e9c85925d0" (UID: "2fd01612-a523-4d9a-9505-43e9c85925d0"). InnerVolumeSpecName "kube-api-access-6x4sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.396726 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4sh\" (UniqueName: \"kubernetes.io/projected/2fd01612-a523-4d9a-9505-43e9c85925d0-kube-api-access-6x4sh\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.396766 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.426586 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fd01612-a523-4d9a-9505-43e9c85925d0" (UID: "2fd01612-a523-4d9a-9505-43e9c85925d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.498828 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd01612-a523-4d9a-9505-43e9c85925d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.763635 4724 generic.go:334] "Generic (PLEG): container finished" podID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerID="236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5" exitCode=0 Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.763689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerDied","Data":"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5"} Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.763999 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjjg" event={"ID":"2fd01612-a523-4d9a-9505-43e9c85925d0","Type":"ContainerDied","Data":"18b322ec5d0a538098cbc2b3de927ead9c4727b685ca58acf82ebd78c2850c24"} Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.764025 4724 scope.go:117] "RemoveContainer" containerID="236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.763774 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjjg" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.787296 4724 scope.go:117] "RemoveContainer" containerID="fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.819179 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.827684 4724 scope.go:117] "RemoveContainer" containerID="7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.831155 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvjjg"] Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.898525 4724 scope.go:117] "RemoveContainer" containerID="236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5" Feb 23 17:56:00 crc kubenswrapper[4724]: E0223 17:56:00.903897 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5\": container with ID starting with 236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5 not found: ID does not exist" containerID="236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.903933 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5"} err="failed to get container status \"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5\": rpc error: code = NotFound desc = could not find container \"236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5\": container with ID starting with 236599f964c4b9b400ca2a40cb87934dba43e0e9c732bcaa41fde4a0db550bf5 not found: ID does not exist" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.903954 4724 scope.go:117] "RemoveContainer" containerID="fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562" Feb 23 17:56:00 crc kubenswrapper[4724]: E0223 17:56:00.904200 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562\": container with ID starting with fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562 not found: ID does not exist" containerID="fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.904228 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562"} err="failed to get container status \"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562\": rpc error: code = NotFound desc = could not find container \"fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562\": container with ID starting with fba8796c0944749ec64de112f6e933825aa56a58a29a8b0d7f06d053826f4562 not found: ID does not exist" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.904245 4724 scope.go:117] "RemoveContainer" containerID="7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f" Feb 23 17:56:00 crc kubenswrapper[4724]: E0223 17:56:00.904508 4724 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f\": container with ID starting with 7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f not found: ID does not exist" containerID="7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.904532 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f"} err="failed to get container status \"7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f\": rpc error: code = NotFound desc = could not find container \"7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f\": container with ID starting with 7ee626156fbfaf49b297c95ca0215d7274ce06096759b6bb213ebc09f5aaff3f not found: ID does not exist" Feb 23 17:56:00 crc kubenswrapper[4724]: I0223 17:56:00.964664 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" path="/var/lib/kubelet/pods/2fd01612-a523-4d9a-9505-43e9c85925d0/volumes" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.919418 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:08 crc kubenswrapper[4724]: E0223 17:56:08.920695 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="extract-content" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.920718 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="extract-content" Feb 23 17:56:08 crc kubenswrapper[4724]: E0223 17:56:08.920765 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="registry-server" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.920775 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="registry-server" Feb 23 17:56:08 crc kubenswrapper[4724]: E0223 17:56:08.920812 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="extract-utilities" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.920822 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="extract-utilities" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.921181 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd01612-a523-4d9a-9505-43e9c85925d0" containerName="registry-server" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.940947 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:08 crc kubenswrapper[4724]: I0223 17:56:08.949784 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.072965 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.073093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.073170 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwd6\" (UniqueName: \"kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.174672 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.175043 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wwd6\" (UniqueName: \"kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.175145 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.176725 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.177165 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.200545 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6wwd6\" (UniqueName: \"kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6\") pod \"redhat-marketplace-9lwk6\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.277219 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:09 crc kubenswrapper[4724]: I0223 17:56:09.792459 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:10 crc kubenswrapper[4724]: I0223 17:56:10.049772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerStarted","Data":"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61"} Feb 23 17:56:10 crc kubenswrapper[4724]: I0223 17:56:10.050189 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerStarted","Data":"e1f7dd52560992947e1753cfcd28a53b5ef4d34cc3f9630d75ed31bcb9576d2e"} Feb 23 17:56:11 crc kubenswrapper[4724]: I0223 17:56:11.069861 4724 generic.go:334] "Generic (PLEG): container finished" podID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerID="f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61" exitCode=0 Feb 23 17:56:11 crc kubenswrapper[4724]: I0223 17:56:11.069900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerDied","Data":"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61"} Feb 23 17:56:13 crc kubenswrapper[4724]: I0223 17:56:13.100363 4724 generic.go:334] "Generic (PLEG): container finished" podID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerID="02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3" exitCode=0 Feb 23 17:56:13 crc kubenswrapper[4724]: I0223 17:56:13.100619 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerDied","Data":"02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3"} Feb 23 17:56:14 crc kubenswrapper[4724]: I0223 17:56:14.114954 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerStarted","Data":"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94"} Feb 23 17:56:14 crc kubenswrapper[4724]: I0223 17:56:14.138598 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9lwk6" podStartSLOduration=3.7402450419999997 podStartE2EDuration="6.138568616s" podCreationTimestamp="2026-02-23 17:56:08 +0000 UTC" firstStartedPulling="2026-02-23 17:56:11.072273344 +0000 UTC m=+1526.888472944" lastFinishedPulling="2026-02-23 17:56:13.470596928 +0000 UTC m=+1529.286796518" observedRunningTime="2026-02-23 17:56:14.134736961 +0000 UTC m=+1529.950936571" watchObservedRunningTime="2026-02-23 17:56:14.138568616 +0000 UTC m=+1529.954768226" Feb 23 17:56:19 crc kubenswrapper[4724]: I0223 17:56:19.277941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:19 crc kubenswrapper[4724]: I0223 17:56:19.278421 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:19 crc kubenswrapper[4724]: I0223 17:56:19.325900 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:20 crc kubenswrapper[4724]: I0223 17:56:20.225777 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:20 crc kubenswrapper[4724]: I0223 17:56:20.282238 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.196902 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9lwk6" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="registry-server" containerID="cri-o://2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94" gracePeriod=2 Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.670610 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.756005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content\") pod \"344fa2fa-2c7a-4a45-8286-ff82f4962620\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.756295 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wwd6\" (UniqueName: \"kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6\") pod \"344fa2fa-2c7a-4a45-8286-ff82f4962620\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.756464 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities\") pod \"344fa2fa-2c7a-4a45-8286-ff82f4962620\" (UID: \"344fa2fa-2c7a-4a45-8286-ff82f4962620\") " Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.757918 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities" (OuterVolumeSpecName: "utilities") pod "344fa2fa-2c7a-4a45-8286-ff82f4962620" (UID: "344fa2fa-2c7a-4a45-8286-ff82f4962620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.771239 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6" (OuterVolumeSpecName: "kube-api-access-6wwd6") pod "344fa2fa-2c7a-4a45-8286-ff82f4962620" (UID: "344fa2fa-2c7a-4a45-8286-ff82f4962620"). InnerVolumeSpecName "kube-api-access-6wwd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.859247 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wwd6\" (UniqueName: \"kubernetes.io/projected/344fa2fa-2c7a-4a45-8286-ff82f4962620-kube-api-access-6wwd6\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:22 crc kubenswrapper[4724]: I0223 17:56:22.859288 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.045992 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "344fa2fa-2c7a-4a45-8286-ff82f4962620" (UID: "344fa2fa-2c7a-4a45-8286-ff82f4962620"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.064017 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344fa2fa-2c7a-4a45-8286-ff82f4962620-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.211434 4724 generic.go:334] "Generic (PLEG): container finished" podID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerID="2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94" exitCode=0 Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.211476 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerDied","Data":"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94"} Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.211504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lwk6" event={"ID":"344fa2fa-2c7a-4a45-8286-ff82f4962620","Type":"ContainerDied","Data":"e1f7dd52560992947e1753cfcd28a53b5ef4d34cc3f9630d75ed31bcb9576d2e"} Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.211537 4724 scope.go:117] "RemoveContainer" containerID="2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.211571 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lwk6" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.248790 4724 scope.go:117] "RemoveContainer" containerID="02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.252774 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.263789 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lwk6"] Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.283951 4724 scope.go:117] "RemoveContainer" containerID="f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.344365 4724 scope.go:117] "RemoveContainer" containerID="2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94" Feb 23 17:56:23 crc kubenswrapper[4724]: E0223 17:56:23.345519 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94\": container with ID starting with 2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94 not found: ID does not exist" containerID="2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.345566 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94"} err="failed to get container status \"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94\": rpc error: code = NotFound desc = could not find container \"2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94\": container with ID starting with 2b2b5c26339a4dd1c0628005c38f666b9d6e4e2f8e6fb579fcef0530e65dbb94 not found: ID does not exist" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.345617 4724 scope.go:117] "RemoveContainer" containerID="02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3" Feb 23 17:56:23 crc kubenswrapper[4724]: E0223 17:56:23.346198 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3\": container with ID starting with 02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3 not found: ID does not exist" containerID="02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.346244 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3"} err="failed to get container status \"02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3\": rpc error: code = NotFound desc = could not find container \"02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3\": container with ID starting with 02a3963ea95b4f718d8c817da6f5b92a51226ca8c3842a2630eeffd45ad066d3 not found: ID does not exist" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.346267 4724 scope.go:117] "RemoveContainer" containerID="f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61" Feb 23 17:56:23 crc kubenswrapper[4724]: E0223 17:56:23.346695 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61\": container with ID starting with f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61 not found: ID does not exist" containerID="f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61" Feb 23 17:56:23 crc kubenswrapper[4724]: I0223 17:56:23.346719 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61"} err="failed to get container status \"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61\": rpc error: code = NotFound desc = could not find container \"f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61\": container with ID starting with f5e49894548480e54d7817f1d0f812a23fc746fc3a94c6c65600d6c4e6ea6a61 not found: ID does not exist" Feb 23 17:56:24 crc kubenswrapper[4724]: I0223 17:56:24.961953 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" path="/var/lib/kubelet/pods/344fa2fa-2c7a-4a45-8286-ff82f4962620/volumes" Feb 23 17:56:27 crc kubenswrapper[4724]: I0223 17:56:27.752291 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:56:27 crc kubenswrapper[4724]: I0223 17:56:27.752949 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:56:27 crc kubenswrapper[4724]: I0223 17:56:27.753006 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:56:27 crc kubenswrapper[4724]: I0223 17:56:27.754684 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 17:56:27 crc kubenswrapper[4724]: I0223 17:56:27.754770 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b" gracePeriod=600 Feb 23 17:56:28 crc kubenswrapper[4724]: I0223 17:56:28.260478 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b" exitCode=0 Feb 23 17:56:28 crc kubenswrapper[4724]: I0223 17:56:28.260568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b"} Feb 23 17:56:28 crc kubenswrapper[4724]: I0223 17:56:28.260795 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"} Feb 23 17:56:28 crc kubenswrapper[4724]: I0223 17:56:28.260817 4724 scope.go:117] "RemoveContainer" containerID="4c3c149666e58c3520418e687c5807bec12f2dc12c5496fde070763093334840" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.706232 4724 scope.go:117] "RemoveContainer" containerID="294f31868cdcc017ecaf968c643faacf81ef76a3632549753f1869e91df6f12e" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.735009 4724 scope.go:117] "RemoveContainer" containerID="daa172e2828ce21702379f3d032a44553407ffd0e5e3a6dbd3e72bf44e56fd19" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.770491 4724 scope.go:117] "RemoveContainer" containerID="d88672f21ece6f9c8b57a6221022fb9dea8ccaff2517cb59f9c19ead7d02e2b5" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.795783 4724 scope.go:117] "RemoveContainer" containerID="bff2dc251ebe0f525f3a9b7f471ef5c7e64ab32deb29c9382a6695ad17c6762e" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.818683 4724 scope.go:117] "RemoveContainer" containerID="b357b2a8522d3f847d9e43eee225e59e952ea4b14358acc96fcb8190b20b1cac" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.840306 4724 scope.go:117] "RemoveContainer" containerID="0fbf36a8cc9e67893583e1422047a79a5eda98ccb265bd58a2a16e860b8f24af" Feb 23 17:56:47 crc kubenswrapper[4724]: I0223 17:56:47.859476 4724 scope.go:117] "RemoveContainer" containerID="8223178440b58213ec3955e3ceda7ac024a2e482a99ba3369fdf5fbfd8f0f815" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.619657 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:57:46 crc kubenswrapper[4724]: E0223 17:57:46.620662 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="extract-utilities" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.620680 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="extract-utilities" Feb 23 17:57:46 crc kubenswrapper[4724]: E0223 17:57:46.620714 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="extract-content" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.620722 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="extract-content" Feb 23 17:57:46 crc kubenswrapper[4724]: E0223 17:57:46.620737 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="registry-server" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.620745 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="registry-server" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.621014 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="344fa2fa-2c7a-4a45-8286-ff82f4962620" containerName="registry-server" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.622873 4724 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.636756 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.727544 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqkl\" (UniqueName: \"kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.727600 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.727623 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.829636 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxqkl\" (UniqueName: \"kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.829700 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.829730 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.830190 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.830265 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 
17:57:46.851319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxqkl\" (UniqueName: \"kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl\") pod \"community-operators-j99hd\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:46 crc kubenswrapper[4724]: I0223 17:57:46.967624 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:47 crc kubenswrapper[4724]: I0223 17:57:47.511222 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:57:47 crc kubenswrapper[4724]: I0223 17:57:47.966411 4724 scope.go:117] "RemoveContainer" containerID="d4d3318f70900e7c5943e5365fb51a413a1cf1332739af013be363f751884ef6" Feb 23 17:57:47 crc kubenswrapper[4724]: I0223 17:57:47.991290 4724 scope.go:117] "RemoveContainer" containerID="1fe1cc8de80d03f4c774cbc8279a2802d878d46a140e6309a3274349cd326acf" Feb 23 17:57:48 crc kubenswrapper[4724]: I0223 17:57:48.007555 4724 scope.go:117] "RemoveContainer" containerID="f4579f75f922b51bafacf8973994907ec9d7ea11e67094d960ceb2d8068095ec" Feb 23 17:57:48 crc kubenswrapper[4724]: I0223 17:57:48.095331 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerDied","Data":"db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e"} Feb 23 17:57:48 crc kubenswrapper[4724]: I0223 17:57:48.095740 4724 generic.go:334] "Generic (PLEG): container finished" podID="ded2365e-04a7-4475-92a1-0ce86f237344" containerID="db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e" exitCode=0 Feb 23 17:57:48 crc kubenswrapper[4724]: I0223 17:57:48.095784 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerStarted","Data":"8638e1f6d894432bfad29c47ca86986047cc3530c788daa988511250b5f98209"} Feb 23 17:57:48 crc kubenswrapper[4724]: I0223 17:57:48.097318 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:57:49 crc kubenswrapper[4724]: I0223 17:57:49.108965 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerStarted","Data":"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d"} Feb 23 17:57:51 crc kubenswrapper[4724]: I0223 17:57:51.129929 4724 generic.go:334] "Generic (PLEG): container finished" podID="ded2365e-04a7-4475-92a1-0ce86f237344" containerID="da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d" exitCode=0 Feb 23 17:57:51 crc kubenswrapper[4724]: I0223 17:57:51.129997 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerDied","Data":"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d"} Feb 23 17:57:52 crc kubenswrapper[4724]: I0223 17:57:52.141943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerStarted","Data":"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0"} Feb 23 
17:57:52 crc kubenswrapper[4724]: I0223 17:57:52.183342 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j99hd" podStartSLOduration=2.708034487 podStartE2EDuration="6.183308074s" podCreationTimestamp="2026-02-23 17:57:46 +0000 UTC" firstStartedPulling="2026-02-23 17:57:48.097030526 +0000 UTC m=+1623.913230126" lastFinishedPulling="2026-02-23 17:57:51.572304113 +0000 UTC m=+1627.388503713" observedRunningTime="2026-02-23 17:57:52.163604494 +0000 UTC m=+1627.979804094" watchObservedRunningTime="2026-02-23 17:57:52.183308074 +0000 UTC m=+1627.999507704" Feb 23 17:57:56 crc kubenswrapper[4724]: I0223 17:57:56.968295 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:56 crc kubenswrapper[4724]: I0223 17:57:56.968696 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:57 crc kubenswrapper[4724]: I0223 17:57:57.029082 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:57 crc kubenswrapper[4724]: I0223 17:57:57.247181 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:57 crc kubenswrapper[4724]: I0223 17:57:57.310931 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.210443 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j99hd" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="registry-server" containerID="cri-o://b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0" gracePeriod=2 Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.805568 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.881173 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content\") pod \"ded2365e-04a7-4475-92a1-0ce86f237344\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.881690 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities\") pod \"ded2365e-04a7-4475-92a1-0ce86f237344\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.881798 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxqkl\" (UniqueName: \"kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl\") pod \"ded2365e-04a7-4475-92a1-0ce86f237344\" (UID: \"ded2365e-04a7-4475-92a1-0ce86f237344\") " Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.882541 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities" (OuterVolumeSpecName: "utilities") pod "ded2365e-04a7-4475-92a1-0ce86f237344" (UID: "ded2365e-04a7-4475-92a1-0ce86f237344"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.892548 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl" (OuterVolumeSpecName: "kube-api-access-lxqkl") pod "ded2365e-04a7-4475-92a1-0ce86f237344" (UID: "ded2365e-04a7-4475-92a1-0ce86f237344"). InnerVolumeSpecName "kube-api-access-lxqkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.984222 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 17:57:59 crc kubenswrapper[4724]: I0223 17:57:59.984250 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxqkl\" (UniqueName: \"kubernetes.io/projected/ded2365e-04a7-4475-92a1-0ce86f237344-kube-api-access-lxqkl\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.018816 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ded2365e-04a7-4475-92a1-0ce86f237344" (UID: "ded2365e-04a7-4475-92a1-0ce86f237344"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.086113 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded2365e-04a7-4475-92a1-0ce86f237344-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.221539 4724 generic.go:334] "Generic (PLEG): container finished" podID="ded2365e-04a7-4475-92a1-0ce86f237344" containerID="b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0" exitCode=0 Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.221609 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerDied","Data":"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0"} Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.221642 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j99hd" event={"ID":"ded2365e-04a7-4475-92a1-0ce86f237344","Type":"ContainerDied","Data":"8638e1f6d894432bfad29c47ca86986047cc3530c788daa988511250b5f98209"} Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.221653 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j99hd" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.221658 4724 scope.go:117] "RemoveContainer" containerID="b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.243346 4724 scope.go:117] "RemoveContainer" containerID="da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.263410 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.280132 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j99hd"] Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.282745 4724 scope.go:117] "RemoveContainer" containerID="db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.316635 4724 scope.go:117] "RemoveContainer" containerID="b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0" Feb 23 17:58:00 crc kubenswrapper[4724]: E0223 17:58:00.317046 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0\": container with ID starting with b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0 not found: ID does not exist" containerID="b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.317092 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0"} err="failed to get container status \"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0\": rpc error: code = NotFound desc = could not find container \"b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0\": container with ID starting with b34322c88e148b5098f762bc4bbcab4aa51fc4639e561538ec356035299ec3a0 not found: ID does not exist" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.317117 4724 scope.go:117] "RemoveContainer" containerID="da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d" Feb 23 17:58:00 crc kubenswrapper[4724]: E0223 17:58:00.317424 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d\": container with ID starting with da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d not found: ID does not exist" containerID="da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.317465 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d"} err="failed to get container status \"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d\": rpc error: code = NotFound desc = could not find container \"da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d\": container with ID starting with da310d35f8acd2d114cec100cfc3e831d1a5789ae7c8f292afb1b939e246863d not found: ID does not exist" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.317490 4724 scope.go:117] "RemoveContainer" 
containerID="db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e" Feb 23 17:58:00 crc kubenswrapper[4724]: E0223 17:58:00.317764 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e\": container with ID starting with db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e not found: ID does not exist" containerID="db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.317792 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e"} err="failed to get container status \"db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e\": rpc error: code = NotFound desc = could not find container \"db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e\": container with ID starting with db3d895ad83d9fb02b96016d3b3f504e7bbb355e073b1e560cea0168d79c037e not found: ID does not exist" Feb 23 17:58:00 crc kubenswrapper[4724]: I0223 17:58:00.968569 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" path="/var/lib/kubelet/pods/ded2365e-04a7-4475-92a1-0ce86f237344/volumes" Feb 23 17:58:11 crc kubenswrapper[4724]: I0223 17:58:11.961572 4724 generic.go:334] "Generic (PLEG): container finished" podID="456d50d3-b5f9-4dd4-9eec-c15f21b183e7" containerID="dfebd486be3ef3c9335b08471b292ea3a817a9ebe28bd950228f681101d56074" exitCode=0 Feb 23 17:58:11 crc kubenswrapper[4724]: I0223 17:58:11.961618 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" event={"ID":"456d50d3-b5f9-4dd4-9eec-c15f21b183e7","Type":"ContainerDied","Data":"dfebd486be3ef3c9335b08471b292ea3a817a9ebe28bd950228f681101d56074"} Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.427364 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.560613 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam\") pod \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.560860 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle\") pod \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.560909 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory\") pod \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.561015 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blt6v\" (UniqueName: \"kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v\") pod \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\" (UID: \"456d50d3-b5f9-4dd4-9eec-c15f21b183e7\") " Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.569980 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v" (OuterVolumeSpecName: "kube-api-access-blt6v") pod "456d50d3-b5f9-4dd4-9eec-c15f21b183e7" (UID: "456d50d3-b5f9-4dd4-9eec-c15f21b183e7"). InnerVolumeSpecName "kube-api-access-blt6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.570338 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "456d50d3-b5f9-4dd4-9eec-c15f21b183e7" (UID: "456d50d3-b5f9-4dd4-9eec-c15f21b183e7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.595989 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory" (OuterVolumeSpecName: "inventory") pod "456d50d3-b5f9-4dd4-9eec-c15f21b183e7" (UID: "456d50d3-b5f9-4dd4-9eec-c15f21b183e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.601879 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "456d50d3-b5f9-4dd4-9eec-c15f21b183e7" (UID: "456d50d3-b5f9-4dd4-9eec-c15f21b183e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.663334 4724 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.663375 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.663387 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blt6v\" (UniqueName: \"kubernetes.io/projected/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-kube-api-access-blt6v\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.663414 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/456d50d3-b5f9-4dd4-9eec-c15f21b183e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.981125 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" event={"ID":"456d50d3-b5f9-4dd4-9eec-c15f21b183e7","Type":"ContainerDied","Data":"bc4fbea159cccae757a39a4d72f6f63251a4bb04c16a00e86021cdde1d5586f9"} Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.981168 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4fbea159cccae757a39a4d72f6f63251a4bb04c16a00e86021cdde1d5586f9" Feb 23 17:58:13 crc kubenswrapper[4724]: I0223 17:58:13.981226 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.087442 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh"] Feb 23 17:58:14 crc kubenswrapper[4724]: E0223 17:58:14.087970 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="extract-utilities" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.087991 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="extract-utilities" Feb 23 17:58:14 crc kubenswrapper[4724]: E0223 17:58:14.088010 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="extract-content" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.088020 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="extract-content" Feb 23 17:58:14 crc kubenswrapper[4724]: E0223 17:58:14.088051 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456d50d3-b5f9-4dd4-9eec-c15f21b183e7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.088062 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="456d50d3-b5f9-4dd4-9eec-c15f21b183e7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 17:58:14 crc kubenswrapper[4724]: E0223 17:58:14.088075 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="registry-server" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.088083 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="registry-server" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.088311 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded2365e-04a7-4475-92a1-0ce86f237344" containerName="registry-server" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.088331 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="456d50d3-b5f9-4dd4-9eec-c15f21b183e7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.089174 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.091382 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.091967 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.092134 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.092450 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.117336 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh"] Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.273974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.274474 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8c7n\" (UniqueName: \"kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.274544 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.376326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.376603 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8c7n\" (UniqueName: \"kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.377031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.379897 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.379924 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.403510 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8c7n\" (UniqueName: \"kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-255hh\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.414017 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.934275 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh"] Feb 23 17:58:14 crc kubenswrapper[4724]: I0223 17:58:14.991525 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" event={"ID":"15bf49cb-7015-49e6-9710-4f701dc9d6f7","Type":"ContainerStarted","Data":"7fe0ae89e6be6d820b179da2e1b1067cc4b7bded2670a637d69a7a8ca5196411"} Feb 23 17:58:16 crc kubenswrapper[4724]: I0223 17:58:16.023671 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" event={"ID":"15bf49cb-7015-49e6-9710-4f701dc9d6f7","Type":"ContainerStarted","Data":"f76783baa779e24ffb66da0b22471eab88d058e09b1b21cfee9ea4dda5afa4bb"} Feb 23 17:58:16 crc kubenswrapper[4724]: I0223 17:58:16.053925 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" podStartSLOduration=1.6674615419999999 podStartE2EDuration="2.053907454s" podCreationTimestamp="2026-02-23 17:58:14 +0000 UTC" firstStartedPulling="2026-02-23 17:58:14.931642143 +0000 UTC m=+1650.747841753" lastFinishedPulling="2026-02-23 17:58:15.318088065 +0000 UTC m=+1651.134287665" observedRunningTime="2026-02-23 17:58:16.049702112 +0000 UTC m=+1651.865901742" watchObservedRunningTime="2026-02-23 17:58:16.053907454 +0000 UTC m=+1651.870107044" Feb 23 17:58:19 crc kubenswrapper[4724]: I0223 17:58:19.038622 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-x87lg"] Feb 23 17:58:19 crc kubenswrapper[4724]: I0223 
17:58:19.049101 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-fq65r"] Feb 23 17:58:19 crc kubenswrapper[4724]: I0223 17:58:19.060059 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-x87lg"] Feb 23 17:58:19 crc kubenswrapper[4724]: I0223 17:58:19.068614 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-fq65r"] Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.032684 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e6b-account-create-update-2q4q5"] Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.050193 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-483b-account-create-update-x5wds"] Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.082550 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e6b-account-create-update-2q4q5"] Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.089494 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-483b-account-create-update-x5wds"] Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.967203 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd67fc2-80dd-4c14-aed2-99eb130182b1" path="/var/lib/kubelet/pods/1fd67fc2-80dd-4c14-aed2-99eb130182b1/volumes" Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.969515 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b45ff2-6bda-4335-aeb4-862daa049364" path="/var/lib/kubelet/pods/28b45ff2-6bda-4335-aeb4-862daa049364/volumes" Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.972705 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3adf9177-46cc-47ea-8884-2868dd612c07" path="/var/lib/kubelet/pods/3adf9177-46cc-47ea-8884-2868dd612c07/volumes" Feb 23 17:58:20 crc kubenswrapper[4724]: I0223 17:58:20.975521 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866fb28f-2850-4b40-8285-f89763b322e3" path="/var/lib/kubelet/pods/866fb28f-2850-4b40-8285-f89763b322e3/volumes" Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.046731 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-hfvt8"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.060986 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pzqhh"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.074998 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f5c-account-create-update-skkrv"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.083530 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-75e7-account-create-update-j7mwp"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.094399 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f5c-account-create-update-skkrv"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.103612 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pzqhh"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.113205 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-hfvt8"] Feb 23 17:58:21 crc kubenswrapper[4724]: I0223 17:58:21.122293 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-75e7-account-create-update-j7mwp"] Feb 23 17:58:22 crc kubenswrapper[4724]: I0223 17:58:22.961369 4724 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0764f35c-ec7a-48c0-bdb9-da3568db426a" path="/var/lib/kubelet/pods/0764f35c-ec7a-48c0-bdb9-da3568db426a/volumes" Feb 23 17:58:22 crc kubenswrapper[4724]: I0223 17:58:22.961981 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08606168-c618-4094-a730-68080afc85d7" path="/var/lib/kubelet/pods/08606168-c618-4094-a730-68080afc85d7/volumes" Feb 23 17:58:22 crc kubenswrapper[4724]: I0223 17:58:22.963356 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dea0f4c-65e8-438f-a731-024b2074c8df" path="/var/lib/kubelet/pods/1dea0f4c-65e8-438f-a731-024b2074c8df/volumes" Feb 23 17:58:22 crc kubenswrapper[4724]: I0223 17:58:22.963905 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83342eb4-7660-4e5f-96e2-883ab91b855e" path="/var/lib/kubelet/pods/83342eb4-7660-4e5f-96e2-883ab91b855e/volumes" Feb 23 17:58:35 crc kubenswrapper[4724]: I0223 17:58:35.040890 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mm7h8"] Feb 23 17:58:35 crc kubenswrapper[4724]: I0223 17:58:35.053226 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mm7h8"] Feb 23 17:58:36 crc kubenswrapper[4724]: I0223 17:58:36.969693 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef" path="/var/lib/kubelet/pods/cb690dfd-e2b9-4ea0-a4d6-9a9ff6adf4ef/volumes" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.085398 4724 scope.go:117] "RemoveContainer" containerID="b8b0290dd7c985d62b4c0175ab00cdca8faf5c6dd4f85457ccb82c659838f144" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.112331 4724 scope.go:117] "RemoveContainer" containerID="b7ee0a74cb8b42ae64f62fb5d95f30e31f5133f05f935d6b4c2d7551863c20eb" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.168174 4724 scope.go:117] "RemoveContainer" containerID="53b7ebad95b623390eef006949fdd7cce73db7f2464e0230186f14ed534af6c8" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.211968 4724 scope.go:117] "RemoveContainer" containerID="966fa6a5bd5eb4a2b058eb8b35d278396f9980fe1a6dd3b6b518e73bd4d555e8" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.261915 4724 scope.go:117] "RemoveContainer" containerID="c15b6390eb34d563565d04df57b7770748a4986a9b30c93e3356990f2e1ce9ab" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.313051 4724 scope.go:117] "RemoveContainer" containerID="3a35009c3015a93806124800b99dae482bbeb42551adb7459592551248ef6e3b" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.363813 4724 scope.go:117] "RemoveContainer" containerID="e527f021b3523d1876018534bed4e165a6c75ee9d83006a33586c06f283ad5b6" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.394374 4724 scope.go:117] "RemoveContainer" containerID="06fb9d8c6b056cdb5197bad91bfc6bebeedc47243b7dfa65594633a702723d50" Feb 23 17:58:48 crc kubenswrapper[4724]: I0223 17:58:48.413197 4724 scope.go:117] "RemoveContainer" containerID="91233d0bfc6d43e7787f565c29d652054e55ccd20a88722464640b9da3923f5c" Feb 23 17:58:57 crc kubenswrapper[4724]: I0223 17:58:57.752357 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:58:57 crc kubenswrapper[4724]: I0223 17:58:57.752883 4724 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.076383 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w2gbv"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.092083 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-959e-account-create-update-kfz4c"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.108033 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4rgbz"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.116719 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e738-account-create-update-6sklf"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.125434 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c185-account-create-update-2zg86"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.133844 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-chd5w"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.143324 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c185-account-create-update-2zg86"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.154531 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-959e-account-create-update-kfz4c"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.162802 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w2gbv"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.187949 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-chd5w"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.196947 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4rgbz"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.205590 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e738-account-create-update-6sklf"] Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.960957 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3f7706-ecc6-45ce-90d7-89bafc6588fd" path="/var/lib/kubelet/pods/1c3f7706-ecc6-45ce-90d7-89bafc6588fd/volumes" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.961903 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8659d3-de95-405b-b137-54708400f566" path="/var/lib/kubelet/pods/2e8659d3-de95-405b-b137-54708400f566/volumes" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.962502 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bbcfd5-a073-443b-afab-650c48febc56" path="/var/lib/kubelet/pods/68bbcfd5-a073-443b-afab-650c48febc56/volumes" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.963004 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8615b39e-8bab-4706-a6b6-e719c566b7dc" path="/var/lib/kubelet/pods/8615b39e-8bab-4706-a6b6-e719c566b7dc/volumes" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.963930 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934da594-6ca7-46b8-954e-c1cff91e3f44" 
path="/var/lib/kubelet/pods/934da594-6ca7-46b8-954e-c1cff91e3f44/volumes" Feb 23 17:58:58 crc kubenswrapper[4724]: I0223 17:58:58.964458 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3811756-e4b8-40fe-9158-30f432841b07" path="/var/lib/kubelet/pods/a3811756-e4b8-40fe-9158-30f432841b07/volumes" Feb 23 17:59:01 crc kubenswrapper[4724]: I0223 17:59:01.030277 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qqzwt"] Feb 23 17:59:01 crc kubenswrapper[4724]: I0223 17:59:01.040269 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qqzwt"] Feb 23 17:59:02 crc kubenswrapper[4724]: I0223 17:59:02.980581 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4835f23c-1737-45fa-8d8f-d5a381c9d498" path="/var/lib/kubelet/pods/4835f23c-1737-45fa-8d8f-d5a381c9d498/volumes" Feb 23 17:59:09 crc kubenswrapper[4724]: I0223 17:59:09.031518 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-69qxx"] Feb 23 17:59:09 crc kubenswrapper[4724]: I0223 17:59:09.046666 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-69qxx"] Feb 23 17:59:10 crc kubenswrapper[4724]: I0223 17:59:10.027316 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zt6pn"] Feb 23 17:59:10 crc kubenswrapper[4724]: I0223 17:59:10.037342 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zt6pn"] Feb 23 17:59:10 crc kubenswrapper[4724]: I0223 17:59:10.961306 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20956a35-60c2-4df4-b475-0a64a3fa11ae" path="/var/lib/kubelet/pods/20956a35-60c2-4df4-b475-0a64a3fa11ae/volumes" Feb 23 17:59:10 crc kubenswrapper[4724]: I0223 17:59:10.962231 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9686c843-cd47-4a6c-992a-97dd99d4304e" path="/var/lib/kubelet/pods/9686c843-cd47-4a6c-992a-97dd99d4304e/volumes" Feb 23 17:59:27 crc kubenswrapper[4724]: I0223 17:59:27.752849 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:59:27 crc kubenswrapper[4724]: I0223 17:59:27.753521 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.580319 4724 scope.go:117] "RemoveContainer" containerID="5a59058eb1fc336cf42338c957b13971843c3de509c90f6fdb13015a7658d4e0" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.630882 4724 scope.go:117] "RemoveContainer" containerID="75fa4f6c6f11759f32ec59979bd45a4d7a27600b287363fc603da871ee5f39bd" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.671378 4724 scope.go:117] "RemoveContainer" containerID="575e86fe8fcea57123a642069fbd47eb6f5ab040c6ba5558f2c99288053d2460" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.715649 4724 scope.go:117] "RemoveContainer" containerID="c52634768d11e5831a1c24915ed25d659594d04b13b74863bee9b508a9921985" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.775436 4724 
scope.go:117] "RemoveContainer" containerID="10d7d78fa3fd71b4661921677c22121cd774b794b279c6d1b98cbcd8e2abd565" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.828383 4724 scope.go:117] "RemoveContainer" containerID="8be48d7a1eb0ad41d208dc3291a360e63288a04c10748e48c7853ba7e3656644" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.862924 4724 scope.go:117] "RemoveContainer" containerID="ec242011e8cfa23eb71faae51f72794f9bc6b983c3dae0e34c6fd63b2963e704" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.897267 4724 scope.go:117] "RemoveContainer" containerID="9c6a1bae99ea621ca5f410f1dd510a271fd827f91fe0b0a36fba0a5600e407a2" Feb 23 17:59:48 crc kubenswrapper[4724]: I0223 17:59:48.930297 4724 scope.go:117] "RemoveContainer" containerID="fd8dcc156220d851af55cbcd9261a4f3b0d8fa6dce9b958f316266b34bfcd863" Feb 23 17:59:49 crc kubenswrapper[4724]: I0223 17:59:49.075481 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-stfh8"] Feb 23 17:59:49 crc kubenswrapper[4724]: I0223 17:59:49.084004 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-stfh8"] Feb 23 17:59:50 crc kubenswrapper[4724]: I0223 17:59:50.962673 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23123829-c64d-4376-8be6-660e7892a057" path="/var/lib/kubelet/pods/23123829-c64d-4376-8be6-660e7892a057/volumes" Feb 23 17:59:56 crc kubenswrapper[4724]: I0223 17:59:56.999344 4724 generic.go:334] "Generic (PLEG): container finished" podID="15bf49cb-7015-49e6-9710-4f701dc9d6f7" containerID="f76783baa779e24ffb66da0b22471eab88d058e09b1b21cfee9ea4dda5afa4bb" exitCode=0 Feb 23 17:59:56 crc kubenswrapper[4724]: I0223 17:59:56.999496 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" event={"ID":"15bf49cb-7015-49e6-9710-4f701dc9d6f7","Type":"ContainerDied","Data":"f76783baa779e24ffb66da0b22471eab88d058e09b1b21cfee9ea4dda5afa4bb"} Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.044812 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2tlht"] Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.053904 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q2ssq"] Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.061863 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2tlht"] Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.070959 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q2ssq"] Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.751928 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.752030 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.752092 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.753570 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 17:59:57 crc kubenswrapper[4724]: I0223 17:59:57.753664 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" gracePeriod=600 Feb 23 17:59:57 crc kubenswrapper[4724]: E0223 17:59:57.910503 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.009810 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" exitCode=0 Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.009883 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"} Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.009943 4724 scope.go:117] "RemoveContainer" containerID="f6f8a7efa8383e0b1ed8ac5db72df9df740ff1c95794a0256d6285d176592a6b" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.010755 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 17:59:58 crc kubenswrapper[4724]: E0223 17:59:58.011070 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.451109 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.465050 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8c7n\" (UniqueName: \"kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n\") pod \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.472563 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n" (OuterVolumeSpecName: "kube-api-access-c8c7n") pod "15bf49cb-7015-49e6-9710-4f701dc9d6f7" (UID: "15bf49cb-7015-49e6-9710-4f701dc9d6f7"). InnerVolumeSpecName "kube-api-access-c8c7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.567666 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory\") pod \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.568005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam\") pod \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\" (UID: \"15bf49cb-7015-49e6-9710-4f701dc9d6f7\") " Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.568722 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8c7n\" (UniqueName: \"kubernetes.io/projected/15bf49cb-7015-49e6-9710-4f701dc9d6f7-kube-api-access-c8c7n\") on node \"crc\" DevicePath \"\"" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.598527 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory" (OuterVolumeSpecName: "inventory") pod "15bf49cb-7015-49e6-9710-4f701dc9d6f7" (UID: "15bf49cb-7015-49e6-9710-4f701dc9d6f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.600775 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15bf49cb-7015-49e6-9710-4f701dc9d6f7" (UID: "15bf49cb-7015-49e6-9710-4f701dc9d6f7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.669869 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.669908 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15bf49cb-7015-49e6-9710-4f701dc9d6f7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.961221 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b" path="/var/lib/kubelet/pods/3cc5a19d-05b2-4ca5-bf8e-0274d62c9a0b/volumes" Feb 23 17:59:58 crc kubenswrapper[4724]: I0223 17:59:58.962003 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7421067a-d596-4a56-82f2-39eabd33567c" path="/var/lib/kubelet/pods/7421067a-d596-4a56-82f2-39eabd33567c/volumes" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.019886 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" event={"ID":"15bf49cb-7015-49e6-9710-4f701dc9d6f7","Type":"ContainerDied","Data":"7fe0ae89e6be6d820b179da2e1b1067cc4b7bded2670a637d69a7a8ca5196411"} Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.019958 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe0ae89e6be6d820b179da2e1b1067cc4b7bded2670a637d69a7a8ca5196411" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.019903 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-255hh" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.094748 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9"] Feb 23 17:59:59 crc kubenswrapper[4724]: E0223 17:59:59.095279 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15bf49cb-7015-49e6-9710-4f701dc9d6f7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.095306 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="15bf49cb-7015-49e6-9710-4f701dc9d6f7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.095788 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bf49cb-7015-49e6-9710-4f701dc9d6f7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.096680 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.099622 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.099688 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.099690 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.100696 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.104868 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9"] Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.178985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzz9f\" (UniqueName: \"kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.179049 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.179275 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.281090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.281656 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzz9f\" (UniqueName: \"kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.281788 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.287225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.288092 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.298543 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzz9f\" (UniqueName: \"kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.422617 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" Feb 23 17:59:59 crc kubenswrapper[4724]: I0223 17:59:59.945885 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9"] Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.031566 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" event={"ID":"a5ffe362-1a42-40ec-8cbf-ce9b83db854d","Type":"ContainerStarted","Data":"715ee4abf721c598d1e66193b8b64f55c2fd3ad536cddc3e4bdab8b82bdfd070"} Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.154711 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"] Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.156367 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.158913 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.159460 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.186673 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"] Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.303648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.303700 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.303734 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89khk\" (UniqueName: \"kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.405178 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.405219 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.405251 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89khk\" (UniqueName: \"kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.407119 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume\") pod 
\"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.410127 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.421874 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89khk\" (UniqueName: \"kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk\") pod \"collect-profiles-29531160-2jhl9\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.483753 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" Feb 23 18:00:00 crc kubenswrapper[4724]: I0223 18:00:00.938947 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"] Feb 23 18:00:01 crc kubenswrapper[4724]: I0223 18:00:01.044851 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" event={"ID":"b908dd80-78c0-49ab-9091-758eec839746","Type":"ContainerStarted","Data":"f8acee797cddb02aa1f27cbc3ca3536daf53879201deb410cd11a1c088a5c1cd"} Feb 23 18:00:01 crc kubenswrapper[4724]: I0223 18:00:01.049805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" event={"ID":"a5ffe362-1a42-40ec-8cbf-ce9b83db854d","Type":"ContainerStarted","Data":"7ecd1d708580ae5ab319fa09e2b18275f140d16297ef9b1c6be058409ead8ebb"} Feb 23 18:00:01 crc kubenswrapper[4724]: I0223 18:00:01.067896 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" podStartSLOduration=1.627055529 podStartE2EDuration="2.067876668s" podCreationTimestamp="2026-02-23 17:59:59 +0000 UTC" firstStartedPulling="2026-02-23 17:59:59.94870144 +0000 UTC m=+1755.764901030" lastFinishedPulling="2026-02-23 18:00:00.389522569 +0000 UTC m=+1756.205722169" observedRunningTime="2026-02-23 18:00:01.063667125 +0000 UTC m=+1756.879866725" watchObservedRunningTime="2026-02-23 18:00:01.067876668 +0000 UTC m=+1756.884076268" Feb 23 18:00:02 crc kubenswrapper[4724]: I0223 18:00:02.059328 4724 generic.go:334] "Generic (PLEG): container finished" podID="b908dd80-78c0-49ab-9091-758eec839746" containerID="a832189586f1522db0f96b9fe38520118a7e97e9accf22634fb6643b9a33d9b9" exitCode=0 Feb 23 18:00:02 crc kubenswrapper[4724]: I0223 18:00:02.060854 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" event={"ID":"b908dd80-78c0-49ab-9091-758eec839746","Type":"ContainerDied","Data":"a832189586f1522db0f96b9fe38520118a7e97e9accf22634fb6643b9a33d9b9"} Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.383931 4724 util.go:48] "No ready sandbox for pod can be found. 
Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.567739 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume\") pod \"b908dd80-78c0-49ab-9091-758eec839746\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") "
Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.567888 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89khk\" (UniqueName: \"kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk\") pod \"b908dd80-78c0-49ab-9091-758eec839746\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") "
Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.568007 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume\") pod \"b908dd80-78c0-49ab-9091-758eec839746\" (UID: \"b908dd80-78c0-49ab-9091-758eec839746\") "
Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.569377 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume" (OuterVolumeSpecName: "config-volume") pod "b908dd80-78c0-49ab-9091-758eec839746" (UID: "b908dd80-78c0-49ab-9091-758eec839746"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:00:03 crc kubenswrapper[4724]: I0223 18:00:03.670373 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b908dd80-78c0-49ab-9091-758eec839746-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.078133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9" event={"ID":"b908dd80-78c0-49ab-9091-758eec839746","Type":"ContainerDied","Data":"f8acee797cddb02aa1f27cbc3ca3536daf53879201deb410cd11a1c088a5c1cd"}
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.078174 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8acee797cddb02aa1f27cbc3ca3536daf53879201deb410cd11a1c088a5c1cd"
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.078246 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.572370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b908dd80-78c0-49ab-9091-758eec839746" (UID: "b908dd80-78c0-49ab-9091-758eec839746"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.572548 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk" (OuterVolumeSpecName: "kube-api-access-89khk") pod "b908dd80-78c0-49ab-9091-758eec839746" (UID: "b908dd80-78c0-49ab-9091-758eec839746"). InnerVolumeSpecName "kube-api-access-89khk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.614028 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b908dd80-78c0-49ab-9091-758eec839746-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 18:00:04 crc kubenswrapper[4724]: I0223 18:00:04.614070 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89khk\" (UniqueName: \"kubernetes.io/projected/b908dd80-78c0-49ab-9091-758eec839746-kube-api-access-89khk\") on node \"crc\" DevicePath \"\""
Feb 23 18:00:10 crc kubenswrapper[4724]: I0223 18:00:10.051755 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-k8sd8"]
Feb 23 18:00:10 crc kubenswrapper[4724]: I0223 18:00:10.070574 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-k8sd8"]
Feb 23 18:00:10 crc kubenswrapper[4724]: I0223 18:00:10.973070 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd" path="/var/lib/kubelet/pods/05e85ad1-bec3-4a1d-b77e-cac6dab7c9fd/volumes"
Feb 23 18:00:11 crc kubenswrapper[4724]: I0223 18:00:11.059135 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kbqzq"]
Feb 23 18:00:11 crc kubenswrapper[4724]: I0223 18:00:11.071842 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kbqzq"]
Feb 23 18:00:12 crc kubenswrapper[4724]: I0223 18:00:12.951205 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:00:12 crc kubenswrapper[4724]: E0223 18:00:12.952983 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:00:12 crc kubenswrapper[4724]: I0223 18:00:12.971431 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987df27c-52c5-4950-be0d-72bbd4164ea6" path="/var/lib/kubelet/pods/987df27c-52c5-4950-be0d-72bbd4164ea6/volumes"
Feb 23 18:00:24 crc kubenswrapper[4724]: I0223 18:00:24.963947 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:00:24 crc kubenswrapper[4724]: E0223 18:00:24.965032 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:00:35 crc kubenswrapper[4724]: I0223 18:00:35.951286 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:00:35 crc kubenswrapper[4724]: E0223 18:00:35.952185 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:00:48 crc kubenswrapper[4724]: I0223 18:00:48.037521 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-phssd"]
Feb 23 18:00:48 crc kubenswrapper[4724]: I0223 18:00:48.046894 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-phssd"]
Feb 23 18:00:48 crc kubenswrapper[4724]: I0223 18:00:48.963122 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b" path="/var/lib/kubelet/pods/59c8cc9e-590f-46f6-a1b3-3cdda2e66f5b/volumes"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.035942 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5016-account-create-update-qmmcj"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.053182 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2pkpd"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.062748 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a49e-account-create-update-w5ddb"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.071661 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6wzkp"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.084262 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5016-account-create-update-qmmcj"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.093584 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-586e-account-create-update-5vfvj"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.101202 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2pkpd"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.109602 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6wzkp"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.116953 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a49e-account-create-update-w5ddb"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.126126 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-586e-account-create-update-5vfvj"]
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.138823 4724 scope.go:117] "RemoveContainer" containerID="d5b80fec05b3057ddd89553615912d9121562cc5a9aae14eccf80b88544ea6e4"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.216781 4724 scope.go:117] "RemoveContainer" containerID="cb27846d6f45fc5bb8869f74bc52bff927385c5e9ffa3a8b5c01b350275cfcab"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.241433 4724 scope.go:117] "RemoveContainer" containerID="83fba2cb037e706579174b1241e86fafbdce0404a40284dd7003cf796d401f35"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.306890 4724 scope.go:117] "RemoveContainer" containerID="2d17032652c3c4cd2052b7d405025127ff9fe855f64f91c894b7002244475759"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.359670 4724 scope.go:117] "RemoveContainer" containerID="277f67881cf8a06dd036d527211a4dabd0e326e4726048374f9afc657ebda77f"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.412150 4724 scope.go:117] "RemoveContainer" containerID="020bc17f4ca0c21f819b49a77352653529f8879ef25555fab725be24d49f8c76"
Feb 23 18:00:49 crc kubenswrapper[4724]: I0223 18:00:49.950929 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:00:49 crc kubenswrapper[4724]: E0223 18:00:49.951475 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:00:50 crc kubenswrapper[4724]: I0223 18:00:50.966765 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e7be71-74ab-423b-9dfd-bd025758573d" path="/var/lib/kubelet/pods/34e7be71-74ab-423b-9dfd-bd025758573d/volumes"
Feb 23 18:00:50 crc kubenswrapper[4724]: I0223 18:00:50.968348 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b41778-b0c6-4bc1-8754-99fc38f1dad5" path="/var/lib/kubelet/pods/44b41778-b0c6-4bc1-8754-99fc38f1dad5/volumes"
Feb 23 18:00:50 crc kubenswrapper[4724]: I0223 18:00:50.969510 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd2bad2-04ed-4658-b65e-c9a4f208114c" path="/var/lib/kubelet/pods/4cd2bad2-04ed-4658-b65e-c9a4f208114c/volumes"
Feb 23 18:00:50 crc kubenswrapper[4724]: I0223 18:00:50.971423 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54d2670-b9ee-480a-a622-386abf8656f1" path="/var/lib/kubelet/pods/b54d2670-b9ee-480a-a622-386abf8656f1/volumes"
Feb 23 18:00:50 crc kubenswrapper[4724]: I0223 18:00:50.973352 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a4fd93-b17a-411c-9173-a8038523ffac" path="/var/lib/kubelet/pods/e3a4fd93-b17a-411c-9173-a8038523ffac/volumes"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.158515 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29531161-mxchr"]
Feb 23 18:01:00 crc kubenswrapper[4724]: E0223 18:01:00.159711 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b908dd80-78c0-49ab-9091-758eec839746" containerName="collect-profiles"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.159733 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b908dd80-78c0-49ab-9091-758eec839746" containerName="collect-profiles"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.160364 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b908dd80-78c0-49ab-9091-758eec839746" containerName="collect-profiles"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.161695 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.190965 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531161-mxchr"]
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.281647 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.281704 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.281726 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8phl\" (UniqueName: \"kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.281757 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.383569 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.383803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.383848 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8phl\" (UniqueName: \"kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.383936 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.389629 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.389694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.390603 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.401659 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8phl\" (UniqueName: \"kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl\") pod \"keystone-cron-29531161-mxchr\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") " pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.489567 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:00 crc kubenswrapper[4724]: I0223 18:01:00.920199 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531161-mxchr"]
Feb 23 18:01:01 crc kubenswrapper[4724]: I0223 18:01:01.654622 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531161-mxchr" event={"ID":"3b373b9a-1005-41fb-92c8-22d259d8f036","Type":"ContainerStarted","Data":"d829c773d466a69893541e9da7388e10baffb833e36baff7eba66ee526b89112"}
Feb 23 18:01:01 crc kubenswrapper[4724]: I0223 18:01:01.656534 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531161-mxchr" event={"ID":"3b373b9a-1005-41fb-92c8-22d259d8f036","Type":"ContainerStarted","Data":"e95f1c22a74dc93102fb0889b545c44693474b29bfe721414e8c91e7db2f0010"}
Feb 23 18:01:01 crc kubenswrapper[4724]: I0223 18:01:01.672664 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29531161-mxchr" podStartSLOduration=1.672641766 podStartE2EDuration="1.672641766s" podCreationTimestamp="2026-02-23 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:01:01.671255631 +0000 UTC m=+1817.487455241" watchObservedRunningTime="2026-02-23 18:01:01.672641766 +0000 UTC m=+1817.488841366"
Feb 23 18:01:01 crc kubenswrapper[4724]: I0223 18:01:01.951666 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:01:01 crc kubenswrapper[4724]: E0223 18:01:01.951919 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:01:05 crc kubenswrapper[4724]: I0223 18:01:05.729780 4724 generic.go:334] "Generic (PLEG): container finished" podID="3b373b9a-1005-41fb-92c8-22d259d8f036" containerID="d829c773d466a69893541e9da7388e10baffb833e36baff7eba66ee526b89112" exitCode=0
Feb 23 18:01:05 crc kubenswrapper[4724]: I0223 18:01:05.729957 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531161-mxchr" event={"ID":"3b373b9a-1005-41fb-92c8-22d259d8f036","Type":"ContainerDied","Data":"d829c773d466a69893541e9da7388e10baffb833e36baff7eba66ee526b89112"}
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.059822 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.257229 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data\") pod \"3b373b9a-1005-41fb-92c8-22d259d8f036\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") "
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.257369 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys\") pod \"3b373b9a-1005-41fb-92c8-22d259d8f036\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") "
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.257442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8phl\" (UniqueName: \"kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl\") pod \"3b373b9a-1005-41fb-92c8-22d259d8f036\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") "
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.257501 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle\") pod \"3b373b9a-1005-41fb-92c8-22d259d8f036\" (UID: \"3b373b9a-1005-41fb-92c8-22d259d8f036\") "
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.262830 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b373b9a-1005-41fb-92c8-22d259d8f036" (UID: "3b373b9a-1005-41fb-92c8-22d259d8f036"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.262861 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl" (OuterVolumeSpecName: "kube-api-access-h8phl") pod "3b373b9a-1005-41fb-92c8-22d259d8f036" (UID: "3b373b9a-1005-41fb-92c8-22d259d8f036"). InnerVolumeSpecName "kube-api-access-h8phl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.290549 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b373b9a-1005-41fb-92c8-22d259d8f036" (UID: "3b373b9a-1005-41fb-92c8-22d259d8f036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.307443 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data" (OuterVolumeSpecName: "config-data") pod "3b373b9a-1005-41fb-92c8-22d259d8f036" (UID: "3b373b9a-1005-41fb-92c8-22d259d8f036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.359417 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.359463 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.359474 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b373b9a-1005-41fb-92c8-22d259d8f036-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.359487 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8phl\" (UniqueName: \"kubernetes.io/projected/3b373b9a-1005-41fb-92c8-22d259d8f036-kube-api-access-h8phl\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.746214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531161-mxchr" event={"ID":"3b373b9a-1005-41fb-92c8-22d259d8f036","Type":"ContainerDied","Data":"e95f1c22a74dc93102fb0889b545c44693474b29bfe721414e8c91e7db2f0010"}
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.746483 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95f1c22a74dc93102fb0889b545c44693474b29bfe721414e8c91e7db2f0010"
Feb 23 18:01:07 crc kubenswrapper[4724]: I0223 18:01:07.746245 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531161-mxchr"
Feb 23 18:01:12 crc kubenswrapper[4724]: I0223 18:01:12.794122 4724 generic.go:334] "Generic (PLEG): container finished" podID="a5ffe362-1a42-40ec-8cbf-ce9b83db854d" containerID="7ecd1d708580ae5ab319fa09e2b18275f140d16297ef9b1c6be058409ead8ebb" exitCode=0
Feb 23 18:01:12 crc kubenswrapper[4724]: I0223 18:01:12.794222 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" event={"ID":"a5ffe362-1a42-40ec-8cbf-ce9b83db854d","Type":"ContainerDied","Data":"7ecd1d708580ae5ab319fa09e2b18275f140d16297ef9b1c6be058409ead8ebb"}
Feb 23 18:01:12 crc kubenswrapper[4724]: I0223 18:01:12.951147 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:01:12 crc kubenswrapper[4724]: E0223 18:01:12.951513 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.292966 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.405768 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzz9f\" (UniqueName: \"kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f\") pod \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") "
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.405947 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory\") pod \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") "
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.406014 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam\") pod \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\" (UID: \"a5ffe362-1a42-40ec-8cbf-ce9b83db854d\") "
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.410977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f" (OuterVolumeSpecName: "kube-api-access-gzz9f") pod "a5ffe362-1a42-40ec-8cbf-ce9b83db854d" (UID: "a5ffe362-1a42-40ec-8cbf-ce9b83db854d"). InnerVolumeSpecName "kube-api-access-gzz9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.433063 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a5ffe362-1a42-40ec-8cbf-ce9b83db854d" (UID: "a5ffe362-1a42-40ec-8cbf-ce9b83db854d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.445695 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory" (OuterVolumeSpecName: "inventory") pod "a5ffe362-1a42-40ec-8cbf-ce9b83db854d" (UID: "a5ffe362-1a42-40ec-8cbf-ce9b83db854d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.508961 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.508995 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.509006 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzz9f\" (UniqueName: \"kubernetes.io/projected/a5ffe362-1a42-40ec-8cbf-ce9b83db854d-kube-api-access-gzz9f\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.809215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9" event={"ID":"a5ffe362-1a42-40ec-8cbf-ce9b83db854d","Type":"ContainerDied","Data":"715ee4abf721c598d1e66193b8b64f55c2fd3ad536cddc3e4bdab8b82bdfd070"}
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.809253 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715ee4abf721c598d1e66193b8b64f55c2fd3ad536cddc3e4bdab8b82bdfd070"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.809405 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.879899 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"]
Feb 23 18:01:14 crc kubenswrapper[4724]: E0223 18:01:14.880385 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b373b9a-1005-41fb-92c8-22d259d8f036" containerName="keystone-cron"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.880425 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b373b9a-1005-41fb-92c8-22d259d8f036" containerName="keystone-cron"
Feb 23 18:01:14 crc kubenswrapper[4724]: E0223 18:01:14.880474 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ffe362-1a42-40ec-8cbf-ce9b83db854d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.880485 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ffe362-1a42-40ec-8cbf-ce9b83db854d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.880713 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ffe362-1a42-40ec-8cbf-ce9b83db854d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.880753 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b373b9a-1005-41fb-92c8-22d259d8f036" containerName="keystone-cron"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.881771 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.884045 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.884313 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.884553 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.887851 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:01:14 crc kubenswrapper[4724]: I0223 18:01:14.893808 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"]
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.018664 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfqv\" (UniqueName: \"kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.018717 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.018765 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.120881 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfqv\" (UniqueName: \"kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.120940 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.121001 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.124888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.125024 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.141154 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfqv\" (UniqueName: \"kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x4j75\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.212468 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.727562 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"]
Feb 23 18:01:15 crc kubenswrapper[4724]: I0223 18:01:15.818750 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75" event={"ID":"a46f5b1a-20be-4f6e-97fb-00662f817dc9","Type":"ContainerStarted","Data":"26eabf58a4fa25aee856181f1f00bee66da23cded43660320396d10385c13b5c"}
Feb 23 18:01:16 crc kubenswrapper[4724]: I0223 18:01:16.833066 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75" event={"ID":"a46f5b1a-20be-4f6e-97fb-00662f817dc9","Type":"ContainerStarted","Data":"2bd8a48c3f4ddaa36ea47b0ff77678989a069df1cf13c2b7968708601e15eeec"}
Feb 23 18:01:16 crc kubenswrapper[4724]: I0223 18:01:16.860673 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75" podStartSLOduration=2.348227786 podStartE2EDuration="2.860649274s" podCreationTimestamp="2026-02-23 18:01:14 +0000 UTC" firstStartedPulling="2026-02-23 18:01:15.733931962 +0000 UTC m=+1831.550131562" lastFinishedPulling="2026-02-23 18:01:16.24635345 +0000 UTC m=+1832.062553050" observedRunningTime="2026-02-23 18:01:16.853196276 +0000 UTC m=+1832.669395886" watchObservedRunningTime="2026-02-23 18:01:16.860649274 +0000 UTC m=+1832.676848874"
Feb 23 18:01:19 crc kubenswrapper[4724]: I0223 18:01:19.041906 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s46d2"]
Feb 23 18:01:19 crc kubenswrapper[4724]: I0223 18:01:19.051602 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s46d2"]
Feb 23 18:01:20 crc kubenswrapper[4724]: I0223 18:01:20.869932 4724 generic.go:334] "Generic (PLEG): container finished" podID="a46f5b1a-20be-4f6e-97fb-00662f817dc9" containerID="2bd8a48c3f4ddaa36ea47b0ff77678989a069df1cf13c2b7968708601e15eeec" exitCode=0
Feb 23 18:01:20 crc kubenswrapper[4724]: I0223 18:01:20.870020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75" event={"ID":"a46f5b1a-20be-4f6e-97fb-00662f817dc9","Type":"ContainerDied","Data":"2bd8a48c3f4ddaa36ea47b0ff77678989a069df1cf13c2b7968708601e15eeec"}
Feb 23 18:01:20 crc kubenswrapper[4724]: I0223 18:01:20.961472 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb" path="/var/lib/kubelet/pods/1c23170d-6cdc-4c4e-be8d-a4e61cb8feeb/volumes"
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.387934 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.465898 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfqv\" (UniqueName: \"kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv\") pod \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") "
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.466438 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory\") pod \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") "
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.466790 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam\") pod \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\" (UID: \"a46f5b1a-20be-4f6e-97fb-00662f817dc9\") "
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.473330 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv" (OuterVolumeSpecName: "kube-api-access-scfqv") pod "a46f5b1a-20be-4f6e-97fb-00662f817dc9" (UID: "a46f5b1a-20be-4f6e-97fb-00662f817dc9"). InnerVolumeSpecName "kube-api-access-scfqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.503254 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a46f5b1a-20be-4f6e-97fb-00662f817dc9" (UID: "a46f5b1a-20be-4f6e-97fb-00662f817dc9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.508264 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory" (OuterVolumeSpecName: "inventory") pod "a46f5b1a-20be-4f6e-97fb-00662f817dc9" (UID: "a46f5b1a-20be-4f6e-97fb-00662f817dc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.569727 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.569770 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfqv\" (UniqueName: \"kubernetes.io/projected/a46f5b1a-20be-4f6e-97fb-00662f817dc9-kube-api-access-scfqv\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.569783 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a46f5b1a-20be-4f6e-97fb-00662f817dc9-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.899838 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75" event={"ID":"a46f5b1a-20be-4f6e-97fb-00662f817dc9","Type":"ContainerDied","Data":"26eabf58a4fa25aee856181f1f00bee66da23cded43660320396d10385c13b5c"}
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.899879 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26eabf58a4fa25aee856181f1f00bee66da23cded43660320396d10385c13b5c"
Feb 23 18:01:22 crc kubenswrapper[4724]: I0223 18:01:22.899964 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x4j75"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.016110 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"]
Feb 23 18:01:23 crc kubenswrapper[4724]: E0223 18:01:23.016566 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f5b1a-20be-4f6e-97fb-00662f817dc9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.016585 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f5b1a-20be-4f6e-97fb-00662f817dc9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.016789 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46f5b1a-20be-4f6e-97fb-00662f817dc9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.017436 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.020078 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.020476 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.020518 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.020866 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.046508 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"]
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.083571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.083673 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.084036 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lj8w\" (UniqueName: \"kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.185638 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.185696 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.185791 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lj8w\" (UniqueName: \"kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.190895 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.190943 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.215786 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lj8w\" (UniqueName: \"kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktllk\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.337191 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.892618 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk"]
Feb 23 18:01:23 crc kubenswrapper[4724]: W0223 18:01:23.897800 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccfb9295_92e0_4f3d_a25c_a3a7f433126e.slice/crio-c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4 WatchSource:0}: Error finding container c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4: Status 404 returned error can't find the container with id c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4
Feb 23 18:01:23 crc kubenswrapper[4724]: I0223 18:01:23.908882 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" event={"ID":"ccfb9295-92e0-4f3d-a25c-a3a7f433126e","Type":"ContainerStarted","Data":"c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4"}
Feb 23 18:01:24 crc kubenswrapper[4724]: I0223 18:01:24.917811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" event={"ID":"ccfb9295-92e0-4f3d-a25c-a3a7f433126e","Type":"ContainerStarted","Data":"bf3762f07aba7c8c4979c71a459b4a4a56664d32680e9d6c2255060b89c00b41"}
Feb 23 18:01:24 crc kubenswrapper[4724]: I0223 18:01:24.939880 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" podStartSLOduration=2.54093065 podStartE2EDuration="2.939855798s" podCreationTimestamp="2026-02-23 18:01:22 +0000 UTC" firstStartedPulling="2026-02-23 18:01:23.900263709 +0000 UTC m=+1839.716463309" lastFinishedPulling="2026-02-23 18:01:24.299188857 +0000 UTC m=+1840.115388457" observedRunningTime="2026-02-23 18:01:24.932535473 +0000 UTC m=+1840.748735073" watchObservedRunningTime="2026-02-23 18:01:24.939855798 +0000 UTC m=+1840.756055398"
Feb 23 18:01:27 crc kubenswrapper[4724]: I0223 18:01:27.951861 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:01:27 crc kubenswrapper[4724]: E0223 18:01:27.953170 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:01:40 crc kubenswrapper[4724]: I0223 18:01:40.951305 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:01:40 crc kubenswrapper[4724]: E0223 18:01:40.953220 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.606268 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"]
Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.610551 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.619468 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"] Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.687080 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.687155 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.687179 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfwm\" (UniqueName: \"kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.789118 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.789185 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjfwm\" (UniqueName: \"kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.789216 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.789930 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.789981 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:41 crc kubenswrapper[4724]: I0223 18:01:41.809672 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gjfwm\" (UniqueName: \"kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm\") pod \"certified-operators-ct9lg\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:42 crc kubenswrapper[4724]: I0223 18:01:42.003269 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:42 crc kubenswrapper[4724]: I0223 18:01:42.048874 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5l6f9"] Feb 23 18:01:42 crc kubenswrapper[4724]: I0223 18:01:42.062463 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5l6f9"] Feb 23 18:01:42 crc kubenswrapper[4724]: I0223 18:01:42.499434 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"] Feb 23 18:01:42 crc kubenswrapper[4724]: I0223 18:01:42.961982 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0a28a1-5db9-4546-836f-1cfa21d4f068" path="/var/lib/kubelet/pods/1e0a28a1-5db9-4546-836f-1cfa21d4f068/volumes" Feb 23 18:01:43 crc kubenswrapper[4724]: I0223 18:01:43.089466 4724 generic.go:334] "Generic (PLEG): container finished" podID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerID="e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5" exitCode=0 Feb 23 18:01:43 crc kubenswrapper[4724]: I0223 18:01:43.089502 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerDied","Data":"e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5"} Feb 23 18:01:43 crc kubenswrapper[4724]: I0223 18:01:43.089543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerStarted","Data":"3dbb564963dcc31b3d0ce1625605152a0118f8f5d11bda0fe73d2b5116fc1f24"} Feb 23 18:01:44 crc kubenswrapper[4724]: I0223 18:01:44.098871 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerStarted","Data":"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd"} Feb 23 18:01:46 crc kubenswrapper[4724]: I0223 18:01:46.033629 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-msc2q"] Feb 23 18:01:46 crc kubenswrapper[4724]: I0223 18:01:46.042356 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-msc2q"] Feb 23 18:01:46 crc kubenswrapper[4724]: I0223 18:01:46.120626 4724 generic.go:334] "Generic (PLEG): container finished" podID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerID="7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd" exitCode=0 Feb 23 18:01:46 crc kubenswrapper[4724]: I0223 18:01:46.120692 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerDied","Data":"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd"} Feb 23 18:01:46 crc kubenswrapper[4724]: I0223 18:01:46.964593 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="54049f9e-01f1-475b-b008-401152f8ca55" path="/var/lib/kubelet/pods/54049f9e-01f1-475b-b008-401152f8ca55/volumes" Feb 23 18:01:47 crc kubenswrapper[4724]: I0223 18:01:47.131898 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerStarted","Data":"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf"} Feb 23 18:01:47 crc kubenswrapper[4724]: I0223 18:01:47.157424 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ct9lg" podStartSLOduration=2.764310755 podStartE2EDuration="6.157405688s" podCreationTimestamp="2026-02-23 18:01:41 +0000 UTC" firstStartedPulling="2026-02-23 18:01:43.091633765 +0000 UTC m=+1858.907833365" lastFinishedPulling="2026-02-23 18:01:46.484728698 +0000 UTC m=+1862.300928298" observedRunningTime="2026-02-23 18:01:47.149666222 +0000 UTC m=+1862.965865822" watchObservedRunningTime="2026-02-23 18:01:47.157405688 +0000 UTC m=+1862.973605288" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.569945 4724 scope.go:117] "RemoveContainer" containerID="ac0d1bb08699a831319f65e3d6736bc6b2fd07cff4e0f5772ed89ae287c00ba6" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.593246 4724 scope.go:117] "RemoveContainer" containerID="0eb9e394baa3796f1f9d644774bed6ef1faa27f119bd06eda4aefb9e9ac2ec76" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.660346 4724 scope.go:117] "RemoveContainer" containerID="92fda6dc4db7212bc07635629965d9904de02e3889e3476e966c0be6f0eca3f3" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.694115 4724 scope.go:117] "RemoveContainer" containerID="7b271015b5b623ab6defaec9155bea3f7ecb86ecbf35e24892b5669611884a1c" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.756296 4724 scope.go:117] "RemoveContainer" containerID="25afc649942830651d7c742d53f9086fc3a7d1a5807c43442496ded939842527" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.808103 4724 scope.go:117] "RemoveContainer" containerID="7695afa7c8a678e78d7c8de09aa13b51249481b2b5e92cb3f9e9b5255540d55c" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.875612 4724 scope.go:117] "RemoveContainer" containerID="b5c8f1ce7cacc65a9f809d2af43294f845ccae54956d141e9e38f6ecb6966019" Feb 23 18:01:49 crc kubenswrapper[4724]: I0223 18:01:49.904744 4724 scope.go:117] "RemoveContainer" containerID="ca28e295cb85e5acc6e5e2021f2a9f421f208b6649d54f537bda9a2fd7c5fd5a" Feb 23 18:01:52 crc kubenswrapper[4724]: I0223 18:01:52.004194 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:52 crc kubenswrapper[4724]: I0223 18:01:52.004555 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:52 crc kubenswrapper[4724]: I0223 18:01:52.059653 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:52 crc kubenswrapper[4724]: I0223 18:01:52.260675 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:53 crc kubenswrapper[4724]: I0223 18:01:53.951626 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:01:53 crc kubenswrapper[4724]: E0223 18:01:53.952284 4724 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:01:54 crc kubenswrapper[4724]: I0223 18:01:54.394015 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"] Feb 23 18:01:54 crc kubenswrapper[4724]: I0223 18:01:54.394247 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ct9lg" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="registry-server" containerID="cri-o://1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf" gracePeriod=2 Feb 23 18:01:54 crc kubenswrapper[4724]: I0223 18:01:54.836462 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.013335 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities\") pod \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.013594 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content\") pod \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.013670 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjfwm\" (UniqueName: \"kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm\") pod \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\" (UID: \"0d9d1bda-6f1b-4907-8e6b-f08ebc795448\") " Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.014154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities" (OuterVolumeSpecName: "utilities") pod "0d9d1bda-6f1b-4907-8e6b-f08ebc795448" (UID: "0d9d1bda-6f1b-4907-8e6b-f08ebc795448"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.014643 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.019228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm" (OuterVolumeSpecName: "kube-api-access-gjfwm") pod "0d9d1bda-6f1b-4907-8e6b-f08ebc795448" (UID: "0d9d1bda-6f1b-4907-8e6b-f08ebc795448"). InnerVolumeSpecName "kube-api-access-gjfwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.064772 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d9d1bda-6f1b-4907-8e6b-f08ebc795448" (UID: "0d9d1bda-6f1b-4907-8e6b-f08ebc795448"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.116973 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.117008 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjfwm\" (UniqueName: \"kubernetes.io/projected/0d9d1bda-6f1b-4907-8e6b-f08ebc795448-kube-api-access-gjfwm\") on node \"crc\" DevicePath \"\"" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.232990 4724 generic.go:334] "Generic (PLEG): container finished" podID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerID="1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf" exitCode=0 Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.233147 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerDied","Data":"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf"} Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.233246 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ct9lg" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.233355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct9lg" event={"ID":"0d9d1bda-6f1b-4907-8e6b-f08ebc795448","Type":"ContainerDied","Data":"3dbb564963dcc31b3d0ce1625605152a0118f8f5d11bda0fe73d2b5116fc1f24"} Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.233384 4724 scope.go:117] "RemoveContainer" containerID="1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.252638 4724 scope.go:117] "RemoveContainer" containerID="7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.266478 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"] Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.275845 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ct9lg"] Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.286991 4724 scope.go:117] "RemoveContainer" containerID="e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.322994 4724 scope.go:117] "RemoveContainer" containerID="1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf" Feb 23 18:01:55 crc kubenswrapper[4724]: E0223 18:01:55.323477 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf\": container with ID starting with 
1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf not found: ID does not exist" containerID="1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.323533 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf"} err="failed to get container status \"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf\": rpc error: code = NotFound desc = could not find container \"1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf\": container with ID starting with 1ac121c7e78914f87ff1f5a40bf09c577e5a9b63b94328b9a1175744e021cdaf not found: ID does not exist" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.323567 4724 scope.go:117] "RemoveContainer" containerID="7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd" Feb 23 18:01:55 crc kubenswrapper[4724]: E0223 18:01:55.323931 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd\": container with ID starting with 7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd not found: ID does not exist" containerID="7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.323977 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd"} err="failed to get container status \"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd\": rpc error: code = NotFound desc = could not find container \"7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd\": container with ID starting with 7021dd17d13212be9842b1f6e24533c9514e05daf9daa1bfa65bc73a684058fd not found: ID does not exist" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.323998 4724 scope.go:117] "RemoveContainer" containerID="e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5" Feb 23 18:01:55 crc kubenswrapper[4724]: E0223 18:01:55.324220 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5\": container with ID starting with e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5 not found: ID does not exist" containerID="e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5" Feb 23 18:01:55 crc kubenswrapper[4724]: I0223 18:01:55.324246 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5"} err="failed to get container status \"e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5\": rpc error: code = NotFound desc = could not find container \"e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5\": container with ID starting with e5cdc4ed3bbfcd325f8f7d19ade5871968c52c3b73d04b4fab05c0cf4f6424f5 not found: ID does not exist" Feb 23 18:01:56 crc kubenswrapper[4724]: I0223 18:01:56.962229 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" path="/var/lib/kubelet/pods/0d9d1bda-6f1b-4907-8e6b-f08ebc795448/volumes" Feb 23 18:02:00 crc kubenswrapper[4724]: I0223 18:02:00.283996 
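
[analysis note] The entries above cover the whole lifecycle of openshift-marketplace/certified-operators-ct9lg: the extract-utilities and extract-content containers each run and exit 0, registry-server starts and passes its startup and readiness probes, an API DELETE triggers a graceful kill (gracePeriod=2), and the volumes are unmounted before the orphaned pod directory is reaped. The trailing "ContainerStatus from runtime service failed ... NotFound" errors are the kubelet asking CRI-O about containers it has itself just removed (the preceding scope.go "RemoveContainer" entries), so in this context they are an expected race, not a fault. Below is a minimal sketch, assuming Python 3 and log lines in exactly this kubenswrapper/klog shape on stdin (the helper name pleg_events is mine), that pairs PLEG ContainerStarted/ContainerDied events per container ID:

    import json
    import re
    import sys

    # journald prefix + klog header, e.g.:
    # "Feb 23 18:01:43 crc kubenswrapper[4724]: I0223 18:01:43.089466 4724 generic.go:334] ..."
    KLOG = re.compile(
        r'^\w{3} +\d+ +[\d:]+ +\S+ kubenswrapper\[\d+\]: '
        r'[IWE]\d{4} (?P<time>\d\d:\d\d:\d\d\.\d+) +\d+ +[\w.]+:\d+\] (?P<msg>.*)$'
    )
    # klog structured message: ... pod="ns/name" event={"ID":...,"Type":...,"Data":...}
    PLEG = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" event=(?P<ev>\{.*\})')

    def pleg_events(lines):
        """Yield (hh:mm:ss.us, pod, event-dict) for every PLEG event line."""
        for line in lines:
            m = KLOG.match(line)
            if not m:
                continue
            p = PLEG.search(m.group('msg'))
            if p:
                yield m.group('time'), p.group('pod'), json.loads(p.group('ev'))

    started = {}
    for when, pod, ev in pleg_events(sys.stdin):
        cid = ev.get('Data', '')
        if ev.get('Type') == 'ContainerStarted':
            started[cid] = when
        elif ev.get('Type') == 'ContainerDied':
            print(f"{pod} {cid[:13]} started={started.get(cid, '?')} died={when}")

Fed this section, it would report e.g. container 7021dd17d1321 (extract-content) starting at 18:01:44 and dying at 18:01:46, matching the exitCode=0 entries above.
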
4724 generic.go:334] "Generic (PLEG): container finished" podID="ccfb9295-92e0-4f3d-a25c-a3a7f433126e" containerID="bf3762f07aba7c8c4979c71a459b4a4a56664d32680e9d6c2255060b89c00b41" exitCode=0 Feb 23 18:02:00 crc kubenswrapper[4724]: I0223 18:02:00.285128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" event={"ID":"ccfb9295-92e0-4f3d-a25c-a3a7f433126e","Type":"ContainerDied","Data":"bf3762f07aba7c8c4979c71a459b4a4a56664d32680e9d6c2255060b89c00b41"} Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.700756 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.846540 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory\") pod \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.846771 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lj8w\" (UniqueName: \"kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w\") pod \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.846901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam\") pod \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\" (UID: \"ccfb9295-92e0-4f3d-a25c-a3a7f433126e\") " Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.852025 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w" (OuterVolumeSpecName: "kube-api-access-4lj8w") pod "ccfb9295-92e0-4f3d-a25c-a3a7f433126e" (UID: "ccfb9295-92e0-4f3d-a25c-a3a7f433126e"). InnerVolumeSpecName "kube-api-access-4lj8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.874001 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory" (OuterVolumeSpecName: "inventory") pod "ccfb9295-92e0-4f3d-a25c-a3a7f433126e" (UID: "ccfb9295-92e0-4f3d-a25c-a3a7f433126e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.875446 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccfb9295-92e0-4f3d-a25c-a3a7f433126e" (UID: "ccfb9295-92e0-4f3d-a25c-a3a7f433126e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.949207 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.949245 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lj8w\" (UniqueName: \"kubernetes.io/projected/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-kube-api-access-4lj8w\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:01 crc kubenswrapper[4724]: I0223 18:02:01.949256 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccfb9295-92e0-4f3d-a25c-a3a7f433126e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.303275 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" event={"ID":"ccfb9295-92e0-4f3d-a25c-a3a7f433126e","Type":"ContainerDied","Data":"c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4"} Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.303331 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktllk" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.303334 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32dbb1db641c8c057875a3d3e1f35191210596372a57343bb51a004e9de24e4" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.392622 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758"] Feb 23 18:02:02 crc kubenswrapper[4724]: E0223 18:02:02.393001 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="extract-content" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393017 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="extract-content" Feb 23 18:02:02 crc kubenswrapper[4724]: E0223 18:02:02.393030 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfb9295-92e0-4f3d-a25c-a3a7f433126e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393040 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfb9295-92e0-4f3d-a25c-a3a7f433126e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:02 crc kubenswrapper[4724]: E0223 18:02:02.393065 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="extract-utilities" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393070 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="extract-utilities" Feb 23 18:02:02 crc kubenswrapper[4724]: E0223 18:02:02.393082 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="registry-server" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393087 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="registry-server" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393488 
4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9d1bda-6f1b-4907-8e6b-f08ebc795448" containerName="registry-server" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.393511 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfb9295-92e0-4f3d-a25c-a3a7f433126e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.394113 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.399176 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.399334 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.399663 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.400663 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.408651 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758"] Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.560998 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kd9\" (UniqueName: \"kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.561059 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.561146 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.662742 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6kd9\" (UniqueName: \"kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.662811 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.662869 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.666349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.667126 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.682125 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6kd9\" (UniqueName: \"kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-g4758\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:02 crc kubenswrapper[4724]: I0223 18:02:02.710369 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:03 crc kubenswrapper[4724]: I0223 18:02:03.210086 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758"] Feb 23 18:02:03 crc kubenswrapper[4724]: I0223 18:02:03.326899 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" event={"ID":"78a23e2d-61b1-4393-95b0-e4872270628a","Type":"ContainerStarted","Data":"a7ec38f583963201d7349654a7bf7d5fa94d1edfb88862db71f91407bacf9bdd"} Feb 23 18:02:04 crc kubenswrapper[4724]: I0223 18:02:04.337297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" event={"ID":"78a23e2d-61b1-4393-95b0-e4872270628a","Type":"ContainerStarted","Data":"b0eb16083de52d62628efacfe6f0294ec91dab6511b191a560ecb9505f4cabca"} Feb 23 18:02:04 crc kubenswrapper[4724]: I0223 18:02:04.361706 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" podStartSLOduration=1.974084692 podStartE2EDuration="2.361681943s" podCreationTimestamp="2026-02-23 18:02:02 +0000 UTC" firstStartedPulling="2026-02-23 18:02:03.217786547 +0000 UTC m=+1879.033986157" lastFinishedPulling="2026-02-23 18:02:03.605383808 +0000 UTC m=+1879.421583408" observedRunningTime="2026-02-23 18:02:04.353142147 +0000 UTC m=+1880.169341767" watchObservedRunningTime="2026-02-23 18:02:04.361681943 +0000 UTC m=+1880.177881543" Feb 23 18:02:05 crc kubenswrapper[4724]: I0223 18:02:05.952037 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:02:05 crc kubenswrapper[4724]: E0223 18:02:05.952666 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:02:19 crc kubenswrapper[4724]: I0223 18:02:19.951735 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:02:19 crc kubenswrapper[4724]: E0223 18:02:19.953612 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:02:26 crc kubenswrapper[4724]: I0223 18:02:26.049709 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-v8dph"] Feb 23 18:02:26 crc kubenswrapper[4724]: I0223 18:02:26.059070 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-v8dph"] Feb 23 18:02:26 crc kubenswrapper[4724]: I0223 18:02:26.964696 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f112391-decc-4aa2-a230-699a0015c306" path="/var/lib/kubelet/pods/8f112391-decc-4aa2-a230-699a0015c306/volumes" Feb 23 
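
[analysis note] The "Observed pod startup duration" records in this window are internally consistent and show how the kubelet's pod_startup_latency_tracker splits startup time: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since image pulls are excluded from the startup SLO. A quick check in plain Python, using the monotonic m=+ offsets copied from the configure-os entry above:

    # m=+ offsets (seconds since kubelet start) from the configure-os startup-latency entry
    first_started_pulling = 1879.033986157
    last_finished_pulling = 1879.421583408
    e2e = 2.361681943  # podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp

    pull_window = last_finished_pulling - first_started_pulling
    slo = e2e - pull_window
    print(f"pull={pull_window:.9f}s slo={slo:.9f}s")
    # -> pull=0.387597251s slo=1.974084692s, matching podStartSLOduration above

The same arithmetic reproduces the install-os figures (2.54 s SLO vs 2.94 s E2E) and, further below, the ssh-known-hosts figures (2.32 s vs 2.78 s).
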
18:02:30 crc kubenswrapper[4724]: I0223 18:02:30.951901 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:02:30 crc kubenswrapper[4724]: E0223 18:02:30.952611 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:02:43 crc kubenswrapper[4724]: I0223 18:02:43.951357 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:02:43 crc kubenswrapper[4724]: E0223 18:02:43.952172 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:02:46 crc kubenswrapper[4724]: I0223 18:02:46.713699 4724 generic.go:334] "Generic (PLEG): container finished" podID="78a23e2d-61b1-4393-95b0-e4872270628a" containerID="b0eb16083de52d62628efacfe6f0294ec91dab6511b191a560ecb9505f4cabca" exitCode=0 Feb 23 18:02:46 crc kubenswrapper[4724]: I0223 18:02:46.713794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" event={"ID":"78a23e2d-61b1-4393-95b0-e4872270628a","Type":"ContainerDied","Data":"b0eb16083de52d62628efacfe6f0294ec91dab6511b191a560ecb9505f4cabca"} Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.105595 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.180150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6kd9\" (UniqueName: \"kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9\") pod \"78a23e2d-61b1-4393-95b0-e4872270628a\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.180213 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory\") pod \"78a23e2d-61b1-4393-95b0-e4872270628a\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.180275 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam\") pod \"78a23e2d-61b1-4393-95b0-e4872270628a\" (UID: \"78a23e2d-61b1-4393-95b0-e4872270628a\") " Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.185503 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9" (OuterVolumeSpecName: "kube-api-access-n6kd9") pod "78a23e2d-61b1-4393-95b0-e4872270628a" (UID: "78a23e2d-61b1-4393-95b0-e4872270628a"). InnerVolumeSpecName "kube-api-access-n6kd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.205844 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78a23e2d-61b1-4393-95b0-e4872270628a" (UID: "78a23e2d-61b1-4393-95b0-e4872270628a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.212528 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory" (OuterVolumeSpecName: "inventory") pod "78a23e2d-61b1-4393-95b0-e4872270628a" (UID: "78a23e2d-61b1-4393-95b0-e4872270628a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.283172 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6kd9\" (UniqueName: \"kubernetes.io/projected/78a23e2d-61b1-4393-95b0-e4872270628a-kube-api-access-n6kd9\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.283205 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.283215 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a23e2d-61b1-4393-95b0-e4872270628a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.735166 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" event={"ID":"78a23e2d-61b1-4393-95b0-e4872270628a","Type":"ContainerDied","Data":"a7ec38f583963201d7349654a7bf7d5fa94d1edfb88862db71f91407bacf9bdd"} Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.735213 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ec38f583963201d7349654a7bf7d5fa94d1edfb88862db71f91407bacf9bdd" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.735219 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-g4758" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.831697 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xldw2"] Feb 23 18:02:48 crc kubenswrapper[4724]: E0223 18:02:48.832341 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a23e2d-61b1-4393-95b0-e4872270628a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.832363 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a23e2d-61b1-4393-95b0-e4872270628a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.832607 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a23e2d-61b1-4393-95b0-e4872270628a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.833292 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.847733 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xldw2"] Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.853960 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.854074 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.855045 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.865688 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.895084 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8fgz\" (UniqueName: \"kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.895211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.895461 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.997620 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.998077 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8fgz\" (UniqueName: \"kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:48 crc kubenswrapper[4724]: I0223 18:02:48.998117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" Feb 23 18:02:49 crc 
kubenswrapper[4724]: I0223 18:02:49.001252 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.001674 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.014116 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8fgz\" (UniqueName: \"kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz\") pod \"ssh-known-hosts-edpm-deployment-xldw2\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.181328 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.702776 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xldw2"]
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.706242 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 18:02:49 crc kubenswrapper[4724]: I0223 18:02:49.746372 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" event={"ID":"3067abd3-b2db-458d-a71c-9f569c2a6bdc","Type":"ContainerStarted","Data":"73088597a95bfdb38ef2d32830c53127642aba9915459efbded1abc33cedc224"}
Feb 23 18:02:50 crc kubenswrapper[4724]: I0223 18:02:50.042324 4724 scope.go:117] "RemoveContainer" containerID="191490c7659fe0b2d6221d27ac21a7ad4e46db8bc840b8e8bfe775a2251f5c71"
Feb 23 18:02:50 crc kubenswrapper[4724]: I0223 18:02:50.756113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" event={"ID":"3067abd3-b2db-458d-a71c-9f569c2a6bdc","Type":"ContainerStarted","Data":"0d74556c670a94b1512f6faf41c48076c5fd3596d73c149dbdefaeb45bb4e7d7"}
Feb 23 18:02:50 crc kubenswrapper[4724]: I0223 18:02:50.783334 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" podStartSLOduration=2.318232236 podStartE2EDuration="2.783296521s" podCreationTimestamp="2026-02-23 18:02:48 +0000 UTC" firstStartedPulling="2026-02-23 18:02:49.706050962 +0000 UTC m=+1925.522250562" lastFinishedPulling="2026-02-23 18:02:50.171115237 +0000 UTC m=+1925.987314847" observedRunningTime="2026-02-23 18:02:50.77710915 +0000 UTC m=+1926.593308740" watchObservedRunningTime="2026-02-23 18:02:50.783296521 +0000 UTC m=+1926.599496121"
Feb 23 18:02:56 crc kubenswrapper[4724]: I0223 18:02:56.951271 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:02:56 crc kubenswrapper[4724]: E0223 18:02:56.952052 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:02:57 crc kubenswrapper[4724]: I0223 18:02:57.838746 4724 generic.go:334] "Generic (PLEG): container finished" podID="3067abd3-b2db-458d-a71c-9f569c2a6bdc" containerID="0d74556c670a94b1512f6faf41c48076c5fd3596d73c149dbdefaeb45bb4e7d7" exitCode=0
Feb 23 18:02:57 crc kubenswrapper[4724]: I0223 18:02:57.838805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" event={"ID":"3067abd3-b2db-458d-a71c-9f569c2a6bdc","Type":"ContainerDied","Data":"0d74556c670a94b1512f6faf41c48076c5fd3596d73c149dbdefaeb45bb4e7d7"}
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.241119 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.339548 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam\") pod \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") "
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.339604 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8fgz\" (UniqueName: \"kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz\") pod \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") "
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.339649 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0\") pod \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\" (UID: \"3067abd3-b2db-458d-a71c-9f569c2a6bdc\") "
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.346290 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz" (OuterVolumeSpecName: "kube-api-access-f8fgz") pod "3067abd3-b2db-458d-a71c-9f569c2a6bdc" (UID: "3067abd3-b2db-458d-a71c-9f569c2a6bdc"). InnerVolumeSpecName "kube-api-access-f8fgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.370636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3067abd3-b2db-458d-a71c-9f569c2a6bdc" (UID: "3067abd3-b2db-458d-a71c-9f569c2a6bdc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.372008 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3067abd3-b2db-458d-a71c-9f569c2a6bdc" (UID: "3067abd3-b2db-458d-a71c-9f569c2a6bdc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.441772 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.442016 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8fgz\" (UniqueName: \"kubernetes.io/projected/3067abd3-b2db-458d-a71c-9f569c2a6bdc-kube-api-access-f8fgz\") on node \"crc\" DevicePath \"\""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.442029 4724 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3067abd3-b2db-458d-a71c-9f569c2a6bdc-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.854745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2" event={"ID":"3067abd3-b2db-458d-a71c-9f569c2a6bdc","Type":"ContainerDied","Data":"73088597a95bfdb38ef2d32830c53127642aba9915459efbded1abc33cedc224"}
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.854793 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73088597a95bfdb38ef2d32830c53127642aba9915459efbded1abc33cedc224"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.854873 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xldw2"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.931246 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"]
Feb 23 18:02:59 crc kubenswrapper[4724]: E0223 18:02:59.931837 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3067abd3-b2db-458d-a71c-9f569c2a6bdc" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.931864 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3067abd3-b2db-458d-a71c-9f569c2a6bdc" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.932108 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3067abd3-b2db-458d-a71c-9f569c2a6bdc" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.933041 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.935869 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.936117 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.935880 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.938572 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:02:59 crc kubenswrapper[4724]: I0223 18:02:59.947945 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"]
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.053135 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rc5\" (UniqueName: \"kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.053220 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.053242 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.154996 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.155363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.155658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rc5\" (UniqueName: \"kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.158839 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.158978 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.172854 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rc5\" (UniqueName: \"kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2zdl\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.252686 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.771364 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"]
Feb 23 18:03:00 crc kubenswrapper[4724]: I0223 18:03:00.863194 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl" event={"ID":"1a8e063f-7461-4365-bb92-a08b5d5c5b1f","Type":"ContainerStarted","Data":"2129036d7d492fe780237c73b8f5ace9643cc65f1fd384c1f656c9ce99e514d4"}
Feb 23 18:03:01 crc kubenswrapper[4724]: I0223 18:03:01.874079 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl" event={"ID":"1a8e063f-7461-4365-bb92-a08b5d5c5b1f","Type":"ContainerStarted","Data":"acdfe1090843612a3f1428f94467f2151905ef790b44a61c2061c38d1089f2fd"}
Feb 23 18:03:01 crc kubenswrapper[4724]: I0223 18:03:01.895801 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl" podStartSLOduration=2.511343174 podStartE2EDuration="2.895784328s" podCreationTimestamp="2026-02-23 18:02:59 +0000 UTC" firstStartedPulling="2026-02-23 18:03:00.78097476 +0000 UTC m=+1936.597174360" lastFinishedPulling="2026-02-23 18:03:01.165415914 +0000 UTC m=+1936.981615514" observedRunningTime="2026-02-23 18:03:01.894483217 +0000 UTC m=+1937.710682817" watchObservedRunningTime="2026-02-23 18:03:01.895784328 +0000 UTC m=+1937.711983928"
Feb 23 18:03:08 crc kubenswrapper[4724]: I0223 18:03:08.937688 4724 generic.go:334] "Generic (PLEG): container finished" podID="1a8e063f-7461-4365-bb92-a08b5d5c5b1f" containerID="acdfe1090843612a3f1428f94467f2151905ef790b44a61c2061c38d1089f2fd" exitCode=0
Feb 23 18:03:08 crc kubenswrapper[4724]: I0223 18:03:08.937772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl" event={"ID":"1a8e063f-7461-4365-bb92-a08b5d5c5b1f","Type":"ContainerDied","Data":"acdfe1090843612a3f1428f94467f2151905ef790b44a61c2061c38d1089f2fd"}
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.338250 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.417421 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam\") pod \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") "
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.417496 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7rc5\" (UniqueName: \"kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5\") pod \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") "
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.417548 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory\") pod \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\" (UID: \"1a8e063f-7461-4365-bb92-a08b5d5c5b1f\") "
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.423455 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5" (OuterVolumeSpecName: "kube-api-access-d7rc5") pod "1a8e063f-7461-4365-bb92-a08b5d5c5b1f" (UID: "1a8e063f-7461-4365-bb92-a08b5d5c5b1f"). InnerVolumeSpecName "kube-api-access-d7rc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.450099 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory" (OuterVolumeSpecName: "inventory") pod "1a8e063f-7461-4365-bb92-a08b5d5c5b1f" (UID: "1a8e063f-7461-4365-bb92-a08b5d5c5b1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.452507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a8e063f-7461-4365-bb92-a08b5d5c5b1f" (UID: "1a8e063f-7461-4365-bb92-a08b5d5c5b1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.518947 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.519144 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7rc5\" (UniqueName: \"kubernetes.io/projected/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-kube-api-access-d7rc5\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.519217 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a8e063f-7461-4365-bb92-a08b5d5c5b1f-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.951280 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:03:10 crc kubenswrapper[4724]: E0223 18:03:10.951773 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.959294 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl"
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.973546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2zdl" event={"ID":"1a8e063f-7461-4365-bb92-a08b5d5c5b1f","Type":"ContainerDied","Data":"2129036d7d492fe780237c73b8f5ace9643cc65f1fd384c1f656c9ce99e514d4"}
Feb 23 18:03:10 crc kubenswrapper[4724]: I0223 18:03:10.973595 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2129036d7d492fe780237c73b8f5ace9643cc65f1fd384c1f656c9ce99e514d4"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.032633 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"]
Feb 23 18:03:11 crc kubenswrapper[4724]: E0223 18:03:11.033356 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8e063f-7461-4365-bb92-a08b5d5c5b1f" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.033403 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8e063f-7461-4365-bb92-a08b5d5c5b1f" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.033664 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8e063f-7461-4365-bb92-a08b5d5c5b1f" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.035725 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.040281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.040508 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.041509 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.041719 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.046010 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"]
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.131439 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpd4k\" (UniqueName: \"kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.131777 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.131864 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.233866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpd4k\" (UniqueName: \"kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.234023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.234055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.238911 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.248557 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.251584 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpd4k\" (UniqueName: \"kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.367787 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:11 crc kubenswrapper[4724]: W0223 18:03:11.869855 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb78bbf2_4067_4e58_b506_5dc2249d2aff.slice/crio-045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188 WatchSource:0}: Error finding container 045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188: Status 404 returned error can't find the container with id 045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.871822 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"]
Feb 23 18:03:11 crc kubenswrapper[4724]: I0223 18:03:11.969715 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb" event={"ID":"bb78bbf2-4067-4e58-b506-5dc2249d2aff","Type":"ContainerStarted","Data":"045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188"}
Feb 23 18:03:14 crc kubenswrapper[4724]: I0223 18:03:14.003451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb" event={"ID":"bb78bbf2-4067-4e58-b506-5dc2249d2aff","Type":"ContainerStarted","Data":"2025d87fe81d2cae148b868b6d86aa233550441deba2bb9f3ecdc7437e58c57c"}
Feb 23 18:03:14 crc kubenswrapper[4724]: I0223 18:03:14.027878 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb" podStartSLOduration=1.859669314 podStartE2EDuration="3.027861826s" podCreationTimestamp="2026-02-23 18:03:11 +0000 UTC" firstStartedPulling="2026-02-23 18:03:11.872727466 +0000 UTC m=+1947.688927076" lastFinishedPulling="2026-02-23 18:03:13.040919988 +0000 UTC m=+1948.857119588" observedRunningTime="2026-02-23 18:03:14.023272344 +0000 UTC m=+1949.839471944" watchObservedRunningTime="2026-02-23 18:03:14.027861826 +0000 UTC m=+1949.844061426"
Feb 23 18:03:21 crc kubenswrapper[4724]: I0223 18:03:21.950730 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:03:21 crc kubenswrapper[4724]: E0223 18:03:21.951665 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:03:22 crc kubenswrapper[4724]: I0223 18:03:22.070075 4724 generic.go:334] "Generic (PLEG): container finished" podID="bb78bbf2-4067-4e58-b506-5dc2249d2aff" containerID="2025d87fe81d2cae148b868b6d86aa233550441deba2bb9f3ecdc7437e58c57c" exitCode=0
Feb 23 18:03:22 crc kubenswrapper[4724]: I0223 18:03:22.070162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb" event={"ID":"bb78bbf2-4067-4e58-b506-5dc2249d2aff","Type":"ContainerDied","Data":"2025d87fe81d2cae148b868b6d86aa233550441deba2bb9f3ecdc7437e58c57c"}
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.478688 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.579339 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory\") pod \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") "
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.579670 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam\") pod \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") "
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.579721 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpd4k\" (UniqueName: \"kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k\") pod \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\" (UID: \"bb78bbf2-4067-4e58-b506-5dc2249d2aff\") "
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.584946 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k" (OuterVolumeSpecName: "kube-api-access-xpd4k") pod "bb78bbf2-4067-4e58-b506-5dc2249d2aff" (UID: "bb78bbf2-4067-4e58-b506-5dc2249d2aff"). InnerVolumeSpecName "kube-api-access-xpd4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.612980 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory" (OuterVolumeSpecName: "inventory") pod "bb78bbf2-4067-4e58-b506-5dc2249d2aff" (UID: "bb78bbf2-4067-4e58-b506-5dc2249d2aff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.614808 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb78bbf2-4067-4e58-b506-5dc2249d2aff" (UID: "bb78bbf2-4067-4e58-b506-5dc2249d2aff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.682336 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.682374 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb78bbf2-4067-4e58-b506-5dc2249d2aff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:23 crc kubenswrapper[4724]: I0223 18:03:23.682402 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpd4k\" (UniqueName: \"kubernetes.io/projected/bb78bbf2-4067-4e58-b506-5dc2249d2aff-kube-api-access-xpd4k\") on node \"crc\" DevicePath \"\""
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.110338 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb" event={"ID":"bb78bbf2-4067-4e58-b506-5dc2249d2aff","Type":"ContainerDied","Data":"045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188"}
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.110417 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045b28ba89467563a3d62840010862f9288c09882df0055f34a3e7c28b877188"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.110492 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.189282 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"]
Feb 23 18:03:24 crc kubenswrapper[4724]: E0223 18:03:24.189703 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb78bbf2-4067-4e58-b506-5dc2249d2aff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.189719 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb78bbf2-4067-4e58-b506-5dc2249d2aff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.189964 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb78bbf2-4067-4e58-b506-5dc2249d2aff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.190752 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.193301 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.193912 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.194156 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.194230 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.194265 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.194737 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.197985 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.198808 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.212056 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"]
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296169 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296268 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296355 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296450 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlwr\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296476 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296499 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296520 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296549 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.296984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.297093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.297180 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.297334 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.399843 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.399905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.399944 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.399981 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400137 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400163 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400257 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdlwr\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400405 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.400452 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.405189 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.405222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.405582 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.406209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.407105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.407480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.408041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.408213 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.409129 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.410366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.410468 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.411848 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.414019 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.426707 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdlwr\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fsscq\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:24 crc kubenswrapper[4724]: I0223 18:03:24.515845 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:03:25 crc kubenswrapper[4724]: I0223 18:03:25.030527 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"]
Feb 23 18:03:25 crc kubenswrapper[4724]: I0223 18:03:25.119213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" event={"ID":"0e96ae5a-4689-4373-bfad-06a0f99345d2","Type":"ContainerStarted","Data":"7fa9a02eea7a3a540a946c3b7df37b949e5cb3907f2c0f1bb7584654e315e9d4"}
Feb 23 18:03:26 crc kubenswrapper[4724]: I0223 18:03:26.129635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" event={"ID":"0e96ae5a-4689-4373-bfad-06a0f99345d2","Type":"ContainerStarted","Data":"3ae6b117bc0c0d26c20cb145eac1f91d839baf0c6ca697716b3a056141a20a9e"}
Feb 23 18:03:26 crc kubenswrapper[4724]: I0223 18:03:26.151286 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" podStartSLOduration=1.76702601 podStartE2EDuration="2.151269002s" podCreationTimestamp="2026-02-23 18:03:24 +0000 UTC" firstStartedPulling="2026-02-23 18:03:25.032450063 +0000 UTC m=+1960.848649663" lastFinishedPulling="2026-02-23 18:03:25.416693055 +0000 UTC m=+1961.232892655" observedRunningTime="2026-02-23 18:03:26.144508603 +0000 UTC m=+1961.960708203" watchObservedRunningTime="2026-02-23 18:03:26.151269002 +0000 UTC m=+1961.967468602"
Feb 23 18:03:35 crc kubenswrapper[4724]: I0223 18:03:35.951476 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:03:35 crc kubenswrapper[4724]: E0223 18:03:35.952253 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:03:50 crc kubenswrapper[4724]: I0223 18:03:50.951495 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:03:50 crc kubenswrapper[4724]: E0223 18:03:50.952251 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:04:01 crc kubenswrapper[4724]: I0223 18:04:01.523740 4724 generic.go:334] "Generic (PLEG): container finished" podID="0e96ae5a-4689-4373-bfad-06a0f99345d2" containerID="3ae6b117bc0c0d26c20cb145eac1f91d839baf0c6ca697716b3a056141a20a9e" exitCode=0
Feb 23 18:04:01 crc kubenswrapper[4724]: I0223 18:04:01.523880 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" event={"ID":"0e96ae5a-4689-4373-bfad-06a0f99345d2","Type":"ContainerDied","Data":"3ae6b117bc0c0d26c20cb145eac1f91d839baf0c6ca697716b3a056141a20a9e"}
Feb 23 18:04:01 crc kubenswrapper[4724]: I0223 18:04:01.951222 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8"
Feb 23 18:04:01 crc kubenswrapper[4724]: E0223 18:04:01.951524 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:04:02 crc kubenswrapper[4724]: I0223 18:04:02.948759 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq"
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.105623 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.106259 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.106443 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.106595 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.106748 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.106909 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.107146 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.107298 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.107499 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.108019 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdlwr\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.108173 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.108326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.108702 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.108945 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle\") pod \"0e96ae5a-4689-4373-bfad-06a0f99345d2\" (UID: \"0e96ae5a-4689-4373-bfad-06a0f99345d2\") "
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.114544 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.115223 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.115323 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.117138 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.117376 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.118172 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.118226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.118601 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.118655 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.119301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.119997 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr" (OuterVolumeSpecName: "kube-api-access-pdlwr") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "kube-api-access-pdlwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212376 4724 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212613 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212678 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212739 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212799 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212858 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212920 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdlwr\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-kube-api-access-pdlwr\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.212972 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.213070 4724 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.213156 4724 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.213225 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0e96ae5a-4689-4373-bfad-06a0f99345d2-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.439306 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.448174 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory" (OuterVolumeSpecName: "inventory") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.448864 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e96ae5a-4689-4373-bfad-06a0f99345d2" (UID: "0e96ae5a-4689-4373-bfad-06a0f99345d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.519652 4724 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.519948 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.520010 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e96ae5a-4689-4373-bfad-06a0f99345d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.546629 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" event={"ID":"0e96ae5a-4689-4373-bfad-06a0f99345d2","Type":"ContainerDied","Data":"7fa9a02eea7a3a540a946c3b7df37b949e5cb3907f2c0f1bb7584654e315e9d4"} Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.546680 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa9a02eea7a3a540a946c3b7df37b949e5cb3907f2c0f1bb7584654e315e9d4" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.546714 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fsscq" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.630707 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p"] Feb 23 18:04:03 crc kubenswrapper[4724]: E0223 18:04:03.631088 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e96ae5a-4689-4373-bfad-06a0f99345d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.631106 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e96ae5a-4689-4373-bfad-06a0f99345d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.631322 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e96ae5a-4689-4373-bfad-06a0f99345d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.632008 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.634481 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.634633 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.635123 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.635162 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.635125 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.648382 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p"] Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.722888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.722962 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2td\" (UniqueName: \"kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.723109 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.723131 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.723184 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.825316 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.825379 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.825493 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.825633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.825720 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2td\" (UniqueName: \"kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.826673 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" 
Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.829788 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.829841 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.830329 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.843292 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2td\" (UniqueName: \"kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wn74p\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:03 crc kubenswrapper[4724]: I0223 18:04:03.947176 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:04:04 crc kubenswrapper[4724]: I0223 18:04:04.473052 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p"] Feb 23 18:04:04 crc kubenswrapper[4724]: I0223 18:04:04.555120 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" event={"ID":"5e7e7627-560c-4959-8d79-7999e31db5be","Type":"ContainerStarted","Data":"f33f4c0ba574470b73eef0ec80e6b4ce53f1599a49e9390179cddb74bf526b70"} Feb 23 18:04:05 crc kubenswrapper[4724]: I0223 18:04:05.574337 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" event={"ID":"5e7e7627-560c-4959-8d79-7999e31db5be","Type":"ContainerStarted","Data":"40e60a2248e0cdae566408465a838e8aa54f9c37dc75b0effd45898a07d6db2d"} Feb 23 18:04:05 crc kubenswrapper[4724]: I0223 18:04:05.598070 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" podStartSLOduration=2.135767182 podStartE2EDuration="2.598055293s" podCreationTimestamp="2026-02-23 18:04:03 +0000 UTC" firstStartedPulling="2026-02-23 18:04:04.47685031 +0000 UTC m=+2000.293049910" lastFinishedPulling="2026-02-23 18:04:04.939138421 +0000 UTC m=+2000.755338021" observedRunningTime="2026-02-23 18:04:05.593515993 +0000 UTC m=+2001.409715593" watchObservedRunningTime="2026-02-23 18:04:05.598055293 +0000 UTC m=+2001.414254893" Feb 23 18:04:14 crc kubenswrapper[4724]: I0223 18:04:14.963709 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:04:14 crc kubenswrapper[4724]: E0223 18:04:14.965689 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:04:25 crc kubenswrapper[4724]: I0223 18:04:25.951126 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:04:25 crc kubenswrapper[4724]: E0223 18:04:25.951916 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:04:36 crc kubenswrapper[4724]: I0223 18:04:36.951470 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:04:36 crc kubenswrapper[4724]: E0223 18:04:36.952301 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:04:49 crc kubenswrapper[4724]: I0223 18:04:49.951212 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:04:49 crc kubenswrapper[4724]: E0223 18:04:49.951997 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:05:01 crc kubenswrapper[4724]: I0223 18:05:01.952205 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:05:02 crc kubenswrapper[4724]: I0223 18:05:02.059886 4724 generic.go:334] "Generic (PLEG): container finished" podID="5e7e7627-560c-4959-8d79-7999e31db5be" containerID="40e60a2248e0cdae566408465a838e8aa54f9c37dc75b0effd45898a07d6db2d" exitCode=0 Feb 23 18:05:02 crc kubenswrapper[4724]: I0223 18:05:02.059928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" event={"ID":"5e7e7627-560c-4959-8d79-7999e31db5be","Type":"ContainerDied","Data":"40e60a2248e0cdae566408465a838e8aa54f9c37dc75b0effd45898a07d6db2d"} Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.069966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170"} Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.609440 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.659588 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0\") pod \"5e7e7627-560c-4959-8d79-7999e31db5be\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.659784 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv2td\" (UniqueName: \"kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td\") pod \"5e7e7627-560c-4959-8d79-7999e31db5be\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.659834 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam\") pod \"5e7e7627-560c-4959-8d79-7999e31db5be\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.659903 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory\") pod \"5e7e7627-560c-4959-8d79-7999e31db5be\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.660643 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle\") pod \"5e7e7627-560c-4959-8d79-7999e31db5be\" (UID: \"5e7e7627-560c-4959-8d79-7999e31db5be\") " Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.667473 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5e7e7627-560c-4959-8d79-7999e31db5be" (UID: "5e7e7627-560c-4959-8d79-7999e31db5be"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.668101 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td" (OuterVolumeSpecName: "kube-api-access-gv2td") pod "5e7e7627-560c-4959-8d79-7999e31db5be" (UID: "5e7e7627-560c-4959-8d79-7999e31db5be"). InnerVolumeSpecName "kube-api-access-gv2td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.687787 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5e7e7627-560c-4959-8d79-7999e31db5be" (UID: "5e7e7627-560c-4959-8d79-7999e31db5be"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.690222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e7e7627-560c-4959-8d79-7999e31db5be" (UID: "5e7e7627-560c-4959-8d79-7999e31db5be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.692932 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory" (OuterVolumeSpecName: "inventory") pod "5e7e7627-560c-4959-8d79-7999e31db5be" (UID: "5e7e7627-560c-4959-8d79-7999e31db5be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.764305 4724 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5e7e7627-560c-4959-8d79-7999e31db5be-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.764344 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv2td\" (UniqueName: \"kubernetes.io/projected/5e7e7627-560c-4959-8d79-7999e31db5be-kube-api-access-gv2td\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.764353 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.764364 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:03 crc kubenswrapper[4724]: I0223 18:05:03.764377 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7e7627-560c-4959-8d79-7999e31db5be-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.082092 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" event={"ID":"5e7e7627-560c-4959-8d79-7999e31db5be","Type":"ContainerDied","Data":"f33f4c0ba574470b73eef0ec80e6b4ce53f1599a49e9390179cddb74bf526b70"} Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.082552 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f33f4c0ba574470b73eef0ec80e6b4ce53f1599a49e9390179cddb74bf526b70" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.082269 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wn74p" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.222225 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw"] Feb 23 18:05:04 crc kubenswrapper[4724]: E0223 18:05:04.222713 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7e7627-560c-4959-8d79-7999e31db5be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.222741 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7e7627-560c-4959-8d79-7999e31db5be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.222969 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7e7627-560c-4959-8d79-7999e31db5be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.223835 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.225971 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.227216 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.228490 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.228490 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.228883 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.237070 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw"] Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.237467 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274056 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274120 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274207 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274370 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgsd\" (UniqueName: \"kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.274435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.375855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.375993 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.376897 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgsd\" (UniqueName: \"kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.377025 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.377136 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.377188 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.385215 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.385611 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.385990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.386795 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.390458 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.397554 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgsd\" (UniqueName: \"kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:04 crc kubenswrapper[4724]: I0223 18:05:04.552293 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:05 crc kubenswrapper[4724]: I0223 18:05:05.040304 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw"] Feb 23 18:05:05 crc kubenswrapper[4724]: W0223 18:05:05.044522 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe67500_5244_403d_8a50_59aa76582492.slice/crio-446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f WatchSource:0}: Error finding container 446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f: Status 404 returned error can't find the container with id 446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f Feb 23 18:05:05 crc kubenswrapper[4724]: I0223 18:05:05.090802 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" event={"ID":"ffe67500-5244-403d-8a50-59aa76582492","Type":"ContainerStarted","Data":"446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f"} Feb 23 18:05:06 crc kubenswrapper[4724]: I0223 18:05:06.101966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" event={"ID":"ffe67500-5244-403d-8a50-59aa76582492","Type":"ContainerStarted","Data":"99d43228dd1cb8575c93f1c3246b4649eb734ebf5f58f4b7e5f719fb93c328d7"} Feb 23 18:05:06 crc kubenswrapper[4724]: I0223 18:05:06.144610 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" podStartSLOduration=1.616945453 podStartE2EDuration="2.144585029s" podCreationTimestamp="2026-02-23 18:05:04 +0000 UTC" firstStartedPulling="2026-02-23 18:05:05.047513452 +0000 UTC m=+2060.863713052" lastFinishedPulling="2026-02-23 18:05:05.575153028 +0000 UTC m=+2061.391352628" observedRunningTime="2026-02-23 18:05:06.129901115 +0000 UTC m=+2061.946100715" watchObservedRunningTime="2026-02-23 18:05:06.144585029 +0000 UTC m=+2061.960784639" Feb 23 18:05:49 crc kubenswrapper[4724]: I0223 18:05:49.518260 4724 generic.go:334] "Generic (PLEG): container finished" podID="ffe67500-5244-403d-8a50-59aa76582492" containerID="99d43228dd1cb8575c93f1c3246b4649eb734ebf5f58f4b7e5f719fb93c328d7" exitCode=0 Feb 23 18:05:49 crc kubenswrapper[4724]: I0223 18:05:49.519009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" event={"ID":"ffe67500-5244-403d-8a50-59aa76582492","Type":"ContainerDied","Data":"99d43228dd1cb8575c93f1c3246b4649eb734ebf5f58f4b7e5f719fb93c328d7"} Feb 23 18:05:50 crc kubenswrapper[4724]: I0223 18:05:50.943857 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.069835 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgsd\" (UniqueName: \"kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.069924 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.069982 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.070077 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.070131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.070250 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam\") pod \"ffe67500-5244-403d-8a50-59aa76582492\" (UID: \"ffe67500-5244-403d-8a50-59aa76582492\") " Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.090099 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.090342 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd" (OuterVolumeSpecName: "kube-api-access-vkgsd") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "kube-api-access-vkgsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.103146 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.103896 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.105112 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory" (OuterVolumeSpecName: "inventory") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.121435 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffe67500-5244-403d-8a50-59aa76582492" (UID: "ffe67500-5244-403d-8a50-59aa76582492"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172781 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172817 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172863 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172874 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgsd\" (UniqueName: \"kubernetes.io/projected/ffe67500-5244-403d-8a50-59aa76582492-kube-api-access-vkgsd\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172883 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.172897 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffe67500-5244-403d-8a50-59aa76582492-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.538447 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" event={"ID":"ffe67500-5244-403d-8a50-59aa76582492","Type":"ContainerDied","Data":"446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f"} Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.538499 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446a9422cd778126df7c9e62c7d5c230033e0308cdde6b75c674c36cfc5aab8f" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.538560 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.643112 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm"] Feb 23 18:05:51 crc kubenswrapper[4724]: E0223 18:05:51.643829 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe67500-5244-403d-8a50-59aa76582492" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.643848 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe67500-5244-403d-8a50-59aa76582492" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.644037 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe67500-5244-403d-8a50-59aa76582492" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.644694 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.648718 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.648867 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.649123 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.649238 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.649580 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.653801 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm"] Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.784412 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.784659 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.784728 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: 
\"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.784822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r25p\" (UniqueName: \"kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.784885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.886354 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.886477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r25p\" (UniqueName: \"kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.886538 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.886601 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.886705 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.891543 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: 
\"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.891837 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.892544 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.896071 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.905626 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r25p\" (UniqueName: \"kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:51 crc kubenswrapper[4724]: I0223 18:05:51.971345 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:05:52 crc kubenswrapper[4724]: I0223 18:05:52.486923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm"] Feb 23 18:05:52 crc kubenswrapper[4724]: I0223 18:05:52.548384 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" event={"ID":"3f5fa243-d790-4006-9c4c-7a1bf93a56b4","Type":"ContainerStarted","Data":"a4937d6900a0b5118203e8684be97c3061328a01a4f0627faac2160a7a95e924"} Feb 23 18:05:54 crc kubenswrapper[4724]: I0223 18:05:54.566528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" event={"ID":"3f5fa243-d790-4006-9c4c-7a1bf93a56b4","Type":"ContainerStarted","Data":"f1f6b0a789c1f425c47dc1e4de7fbae8eb79dfbc88db0c20690e112b1c49a232"} Feb 23 18:05:54 crc kubenswrapper[4724]: I0223 18:05:54.588499 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" podStartSLOduration=2.805990472 podStartE2EDuration="3.588481542s" podCreationTimestamp="2026-02-23 18:05:51 +0000 UTC" firstStartedPulling="2026-02-23 18:05:52.495781033 +0000 UTC m=+2108.311980633" lastFinishedPulling="2026-02-23 18:05:53.278272103 +0000 UTC m=+2109.094471703" observedRunningTime="2026-02-23 18:05:54.584721709 +0000 UTC m=+2110.400921329" watchObservedRunningTime="2026-02-23 18:05:54.588481542 +0000 UTC m=+2110.404681142" Feb 23 18:06:08 crc kubenswrapper[4724]: I0223 18:06:08.916755 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:08 crc kubenswrapper[4724]: I0223 18:06:08.921440 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:08 crc kubenswrapper[4724]: I0223 18:06:08.931204 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.046269 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.046328 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkdh\" (UniqueName: \"kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.046727 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.149033 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.149242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.149294 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkdh\" (UniqueName: \"kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.150050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.150155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.170610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nxkdh\" (UniqueName: \"kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh\") pod \"redhat-operators-vmfzq\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.244297 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:09 crc kubenswrapper[4724]: I0223 18:06:09.727152 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:09 crc kubenswrapper[4724]: W0223 18:06:09.734637 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8720f0c5_7219_4973_9af8_143d9725ac76.slice/crio-937df34a907885eadf98381a8d779b3e4ec94863010e4dd52a31311f2a3f0a17 WatchSource:0}: Error finding container 937df34a907885eadf98381a8d779b3e4ec94863010e4dd52a31311f2a3f0a17: Status 404 returned error can't find the container with id 937df34a907885eadf98381a8d779b3e4ec94863010e4dd52a31311f2a3f0a17 Feb 23 18:06:10 crc kubenswrapper[4724]: I0223 18:06:10.703003 4724 generic.go:334] "Generic (PLEG): container finished" podID="8720f0c5-7219-4973-9af8-143d9725ac76" containerID="6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1" exitCode=0 Feb 23 18:06:10 crc kubenswrapper[4724]: I0223 18:06:10.703070 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerDied","Data":"6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1"} Feb 23 18:06:10 crc kubenswrapper[4724]: I0223 18:06:10.703288 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerStarted","Data":"937df34a907885eadf98381a8d779b3e4ec94863010e4dd52a31311f2a3f0a17"} Feb 23 18:06:12 crc kubenswrapper[4724]: I0223 18:06:12.727823 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerStarted","Data":"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40"} Feb 23 18:06:17 crc kubenswrapper[4724]: I0223 18:06:17.789674 4724 generic.go:334] "Generic (PLEG): container finished" podID="8720f0c5-7219-4973-9af8-143d9725ac76" containerID="7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40" exitCode=0 Feb 23 18:06:17 crc kubenswrapper[4724]: I0223 18:06:17.789850 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerDied","Data":"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40"} Feb 23 18:06:19 crc kubenswrapper[4724]: I0223 18:06:19.813229 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerStarted","Data":"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f"} Feb 23 18:06:19 crc kubenswrapper[4724]: I0223 18:06:19.837590 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmfzq" podStartSLOduration=3.314172587 podStartE2EDuration="11.837566025s" 
podCreationTimestamp="2026-02-23 18:06:08 +0000 UTC" firstStartedPulling="2026-02-23 18:06:10.705440432 +0000 UTC m=+2126.521640032" lastFinishedPulling="2026-02-23 18:06:19.22883387 +0000 UTC m=+2135.045033470" observedRunningTime="2026-02-23 18:06:19.83628243 +0000 UTC m=+2135.652482030" watchObservedRunningTime="2026-02-23 18:06:19.837566025 +0000 UTC m=+2135.653765625" Feb 23 18:06:29 crc kubenswrapper[4724]: I0223 18:06:29.244756 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:29 crc kubenswrapper[4724]: I0223 18:06:29.245165 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:30 crc kubenswrapper[4724]: I0223 18:06:30.295800 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmfzq" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="registry-server" probeResult="failure" output=< Feb 23 18:06:30 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:06:30 crc kubenswrapper[4724]: > Feb 23 18:06:39 crc kubenswrapper[4724]: I0223 18:06:39.302077 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:39 crc kubenswrapper[4724]: I0223 18:06:39.361440 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:39 crc kubenswrapper[4724]: I0223 18:06:39.554976 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.013313 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmfzq" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="registry-server" containerID="cri-o://36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f" gracePeriod=2 Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.616874 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.632519 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities\") pod \"8720f0c5-7219-4973-9af8-143d9725ac76\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.632558 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content\") pod \"8720f0c5-7219-4973-9af8-143d9725ac76\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.632682 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxkdh\" (UniqueName: \"kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh\") pod \"8720f0c5-7219-4973-9af8-143d9725ac76\" (UID: \"8720f0c5-7219-4973-9af8-143d9725ac76\") " Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.634409 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities" (OuterVolumeSpecName: "utilities") pod "8720f0c5-7219-4973-9af8-143d9725ac76" (UID: "8720f0c5-7219-4973-9af8-143d9725ac76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.656897 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh" (OuterVolumeSpecName: "kube-api-access-nxkdh") pod "8720f0c5-7219-4973-9af8-143d9725ac76" (UID: "8720f0c5-7219-4973-9af8-143d9725ac76"). InnerVolumeSpecName "kube-api-access-nxkdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.734132 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.734167 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxkdh\" (UniqueName: \"kubernetes.io/projected/8720f0c5-7219-4973-9af8-143d9725ac76-kube-api-access-nxkdh\") on node \"crc\" DevicePath \"\"" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.800507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8720f0c5-7219-4973-9af8-143d9725ac76" (UID: "8720f0c5-7219-4973-9af8-143d9725ac76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:06:41 crc kubenswrapper[4724]: I0223 18:06:41.836694 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8720f0c5-7219-4973-9af8-143d9725ac76-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.028770 4724 generic.go:334] "Generic (PLEG): container finished" podID="8720f0c5-7219-4973-9af8-143d9725ac76" containerID="36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f" exitCode=0 Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.028813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerDied","Data":"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f"} Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.028841 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmfzq" event={"ID":"8720f0c5-7219-4973-9af8-143d9725ac76","Type":"ContainerDied","Data":"937df34a907885eadf98381a8d779b3e4ec94863010e4dd52a31311f2a3f0a17"} Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.028858 4724 scope.go:117] "RemoveContainer" containerID="36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.028981 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmfzq" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.054129 4724 scope.go:117] "RemoveContainer" containerID="7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.058130 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.065831 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmfzq"] Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.093562 4724 scope.go:117] "RemoveContainer" containerID="6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.115039 4724 scope.go:117] "RemoveContainer" containerID="36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f" Feb 23 18:06:42 crc kubenswrapper[4724]: E0223 18:06:42.115537 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f\": container with ID starting with 36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f not found: ID does not exist" containerID="36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.115584 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f"} err="failed to get container status \"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f\": rpc error: code = NotFound desc = could not find container \"36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f\": container with ID starting with 36d52a1ee35b45bcf4ec6d7f6dc8ab4be3fce3614a097ec59fe7240983411a1f not found: ID does not exist" Feb 23 18:06:42 crc 
kubenswrapper[4724]: I0223 18:06:42.115613 4724 scope.go:117] "RemoveContainer" containerID="7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40" Feb 23 18:06:42 crc kubenswrapper[4724]: E0223 18:06:42.116012 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40\": container with ID starting with 7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40 not found: ID does not exist" containerID="7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.116044 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40"} err="failed to get container status \"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40\": rpc error: code = NotFound desc = could not find container \"7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40\": container with ID starting with 7fb287e1a1d2bf53b48ff08774c3408c57b9556f738813ff1e73269a89346e40 not found: ID does not exist" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.116059 4724 scope.go:117] "RemoveContainer" containerID="6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1" Feb 23 18:06:42 crc kubenswrapper[4724]: E0223 18:06:42.116468 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1\": container with ID starting with 6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1 not found: ID does not exist" containerID="6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.116511 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1"} err="failed to get container status \"6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1\": rpc error: code = NotFound desc = could not find container \"6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1\": container with ID starting with 6a968122ec96a247dbe2dab0948972f317a48367f2b21a34dd121afb09bb2ea1 not found: ID does not exist" Feb 23 18:06:42 crc kubenswrapper[4724]: I0223 18:06:42.961260 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" path="/var/lib/kubelet/pods/8720f0c5-7219-4973-9af8-143d9725ac76/volumes" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.817172 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:03 crc kubenswrapper[4724]: E0223 18:07:03.820411 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="registry-server" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.820446 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="registry-server" Feb 23 18:07:03 crc kubenswrapper[4724]: E0223 18:07:03.820460 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="extract-content" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.820469 4724 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="extract-content" Feb 23 18:07:03 crc kubenswrapper[4724]: E0223 18:07:03.820692 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="extract-utilities" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.820700 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="extract-utilities" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.820952 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8720f0c5-7219-4973-9af8-143d9725ac76" containerName="registry-server" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.822789 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.832099 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.977696 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vp8\" (UniqueName: \"kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.978333 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:03 crc kubenswrapper[4724]: I0223 18:07:03.978442 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.081401 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.081498 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vp8\" (UniqueName: \"kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.081580 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc 
kubenswrapper[4724]: I0223 18:07:04.081936 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.082263 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.103891 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vp8\" (UniqueName: \"kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8\") pod \"redhat-marketplace-ckhwl\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.155107 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:04 crc kubenswrapper[4724]: I0223 18:07:04.668878 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:05 crc kubenswrapper[4724]: I0223 18:07:05.234247 4724 generic.go:334] "Generic (PLEG): container finished" podID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerID="3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b" exitCode=0 Feb 23 18:07:05 crc kubenswrapper[4724]: I0223 18:07:05.234355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerDied","Data":"3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b"} Feb 23 18:07:05 crc kubenswrapper[4724]: I0223 18:07:05.235125 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerStarted","Data":"b3c4ee43d8fc969a4dde79bd59eb7b0205c388a3bb4bbac80919ad2bf99f569a"} Feb 23 18:07:07 crc kubenswrapper[4724]: I0223 18:07:07.262466 4724 generic.go:334] "Generic (PLEG): container finished" podID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerID="9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc" exitCode=0 Feb 23 18:07:07 crc kubenswrapper[4724]: I0223 18:07:07.262546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerDied","Data":"9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc"} Feb 23 18:07:08 crc kubenswrapper[4724]: I0223 18:07:08.280469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerStarted","Data":"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b"} Feb 23 18:07:08 crc kubenswrapper[4724]: I0223 18:07:08.314611 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ckhwl" podStartSLOduration=2.705140993 
podStartE2EDuration="5.314592843s" podCreationTimestamp="2026-02-23 18:07:03 +0000 UTC" firstStartedPulling="2026-02-23 18:07:05.236366571 +0000 UTC m=+2181.052566171" lastFinishedPulling="2026-02-23 18:07:07.845818421 +0000 UTC m=+2183.662018021" observedRunningTime="2026-02-23 18:07:08.304588393 +0000 UTC m=+2184.120788013" watchObservedRunningTime="2026-02-23 18:07:08.314592843 +0000 UTC m=+2184.130792443" Feb 23 18:07:14 crc kubenswrapper[4724]: I0223 18:07:14.156273 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:14 crc kubenswrapper[4724]: I0223 18:07:14.156923 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:14 crc kubenswrapper[4724]: I0223 18:07:14.202321 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:14 crc kubenswrapper[4724]: I0223 18:07:14.378262 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:14 crc kubenswrapper[4724]: I0223 18:07:14.440425 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.352043 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ckhwl" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="registry-server" containerID="cri-o://f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b" gracePeriod=2 Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.822946 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.943983 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vp8\" (UniqueName: \"kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8\") pod \"42810e8d-cfd8-4629-9cbb-3ebc85683364\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.944139 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content\") pod \"42810e8d-cfd8-4629-9cbb-3ebc85683364\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.944401 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities\") pod \"42810e8d-cfd8-4629-9cbb-3ebc85683364\" (UID: \"42810e8d-cfd8-4629-9cbb-3ebc85683364\") " Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.945913 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities" (OuterVolumeSpecName: "utilities") pod "42810e8d-cfd8-4629-9cbb-3ebc85683364" (UID: "42810e8d-cfd8-4629-9cbb-3ebc85683364"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:07:16 crc kubenswrapper[4724]: I0223 18:07:16.950636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8" (OuterVolumeSpecName: "kube-api-access-c6vp8") pod "42810e8d-cfd8-4629-9cbb-3ebc85683364" (UID: "42810e8d-cfd8-4629-9cbb-3ebc85683364"). InnerVolumeSpecName "kube-api-access-c6vp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.046588 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.046623 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vp8\" (UniqueName: \"kubernetes.io/projected/42810e8d-cfd8-4629-9cbb-3ebc85683364-kube-api-access-c6vp8\") on node \"crc\" DevicePath \"\"" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.059647 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42810e8d-cfd8-4629-9cbb-3ebc85683364" (UID: "42810e8d-cfd8-4629-9cbb-3ebc85683364"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.148003 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42810e8d-cfd8-4629-9cbb-3ebc85683364-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.362862 4724 generic.go:334] "Generic (PLEG): container finished" podID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerID="f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b" exitCode=0 Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.362909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerDied","Data":"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b"} Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.362923 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckhwl" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.362941 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckhwl" event={"ID":"42810e8d-cfd8-4629-9cbb-3ebc85683364","Type":"ContainerDied","Data":"b3c4ee43d8fc969a4dde79bd59eb7b0205c388a3bb4bbac80919ad2bf99f569a"} Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.362964 4724 scope.go:117] "RemoveContainer" containerID="f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.385352 4724 scope.go:117] "RemoveContainer" containerID="9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.402058 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.415082 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckhwl"] Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.417670 4724 scope.go:117] "RemoveContainer" containerID="3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.499725 4724 scope.go:117] "RemoveContainer" containerID="f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b" Feb 23 18:07:17 crc kubenswrapper[4724]: E0223 18:07:17.500155 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b\": container with ID starting with f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b not found: ID does not exist" containerID="f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.500191 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b"} err="failed to get container status \"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b\": rpc error: code = NotFound desc = could not find container \"f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b\": container with ID starting with f532bd604e4ce11186750749db0a5e0ec35a94119965d8890a8854e50749da5b not found: ID does not exist" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.500217 4724 scope.go:117] "RemoveContainer" containerID="9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc" Feb 23 18:07:17 crc kubenswrapper[4724]: E0223 18:07:17.500541 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc\": container with ID starting with 9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc not found: ID does not exist" containerID="9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.500569 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc"} err="failed to get container status \"9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc\": rpc error: code = NotFound desc = could not find 
container \"9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc\": container with ID starting with 9c296b6523ee1acb77a5e9e0995e2df633c000ccf73c3cf44568d392fbf324bc not found: ID does not exist" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.500591 4724 scope.go:117] "RemoveContainer" containerID="3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b" Feb 23 18:07:17 crc kubenswrapper[4724]: E0223 18:07:17.500935 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b\": container with ID starting with 3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b not found: ID does not exist" containerID="3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b" Feb 23 18:07:17 crc kubenswrapper[4724]: I0223 18:07:17.500959 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b"} err="failed to get container status \"3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b\": rpc error: code = NotFound desc = could not find container \"3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b\": container with ID starting with 3ee067cb18b3d07529081d2c44904e3f120df2cd6144be112b6d6326206ac29b not found: ID does not exist" Feb 23 18:07:18 crc kubenswrapper[4724]: I0223 18:07:18.961547 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" path="/var/lib/kubelet/pods/42810e8d-cfd8-4629-9cbb-3ebc85683364/volumes" Feb 23 18:07:27 crc kubenswrapper[4724]: I0223 18:07:27.752547 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:07:27 crc kubenswrapper[4724]: I0223 18:07:27.753049 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:07:57 crc kubenswrapper[4724]: I0223 18:07:57.752101 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:07:57 crc kubenswrapper[4724]: I0223 18:07:57.752650 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:08:27 crc kubenswrapper[4724]: I0223 18:08:27.751722 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 
18:08:27 crc kubenswrapper[4724]: I0223 18:08:27.752264 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:08:27 crc kubenswrapper[4724]: I0223 18:08:27.752314 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:08:27 crc kubenswrapper[4724]: I0223 18:08:27.753093 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:08:27 crc kubenswrapper[4724]: I0223 18:08:27.753152 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170" gracePeriod=600 Feb 23 18:08:27 crc kubenswrapper[4724]: E0223 18:08:27.970920 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda065b197_b354_4d9b_b2e9_7d4882a3d1a2.slice/crio-conmon-d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda065b197_b354_4d9b_b2e9_7d4882a3d1a2.slice/crio-d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:08:28 crc kubenswrapper[4724]: I0223 18:08:28.028335 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170" exitCode=0 Feb 23 18:08:28 crc kubenswrapper[4724]: I0223 18:08:28.028405 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170"} Feb 23 18:08:28 crc kubenswrapper[4724]: I0223 18:08:28.028448 4724 scope.go:117] "RemoveContainer" containerID="a9d9154f54d232a189bc26a1a3f88396d8e7bd4b7bf9b2b3dcf38e1648177ac8" Feb 23 18:08:29 crc kubenswrapper[4724]: I0223 18:08:29.039774 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"} Feb 23 18:09:47 crc kubenswrapper[4724]: I0223 18:09:47.819333 4724 generic.go:334] "Generic (PLEG): container finished" podID="3f5fa243-d790-4006-9c4c-7a1bf93a56b4" containerID="f1f6b0a789c1f425c47dc1e4de7fbae8eb79dfbc88db0c20690e112b1c49a232" exitCode=0 Feb 23 18:09:47 crc kubenswrapper[4724]: I0223 18:09:47.819430 4724 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" event={"ID":"3f5fa243-d790-4006-9c4c-7a1bf93a56b4","Type":"ContainerDied","Data":"f1f6b0a789c1f425c47dc1e4de7fbae8eb79dfbc88db0c20690e112b1c49a232"} Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.255214 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.330991 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r25p\" (UniqueName: \"kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p\") pod \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.331151 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle\") pod \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.331193 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0\") pod \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.331220 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory\") pod \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.331246 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam\") pod \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\" (UID: \"3f5fa243-d790-4006-9c4c-7a1bf93a56b4\") " Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.336370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p" (OuterVolumeSpecName: "kube-api-access-6r25p") pod "3f5fa243-d790-4006-9c4c-7a1bf93a56b4" (UID: "3f5fa243-d790-4006-9c4c-7a1bf93a56b4"). InnerVolumeSpecName "kube-api-access-6r25p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.337523 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3f5fa243-d790-4006-9c4c-7a1bf93a56b4" (UID: "3f5fa243-d790-4006-9c4c-7a1bf93a56b4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.357877 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory" (OuterVolumeSpecName: "inventory") pod "3f5fa243-d790-4006-9c4c-7a1bf93a56b4" (UID: "3f5fa243-d790-4006-9c4c-7a1bf93a56b4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.359741 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3f5fa243-d790-4006-9c4c-7a1bf93a56b4" (UID: "3f5fa243-d790-4006-9c4c-7a1bf93a56b4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.359760 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3f5fa243-d790-4006-9c4c-7a1bf93a56b4" (UID: "3f5fa243-d790-4006-9c4c-7a1bf93a56b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.432966 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.433000 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.433010 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.433018 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.433026 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r25p\" (UniqueName: \"kubernetes.io/projected/3f5fa243-d790-4006-9c4c-7a1bf93a56b4-kube-api-access-6r25p\") on node \"crc\" DevicePath \"\"" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.836833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" event={"ID":"3f5fa243-d790-4006-9c4c-7a1bf93a56b4","Type":"ContainerDied","Data":"a4937d6900a0b5118203e8684be97c3061328a01a4f0627faac2160a7a95e924"} Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.836870 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4937d6900a0b5118203e8684be97c3061328a01a4f0627faac2160a7a95e924" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.836877 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943131 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c"] Feb 23 18:09:49 crc kubenswrapper[4724]: E0223 18:09:49.943657 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="registry-server" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943675 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="registry-server" Feb 23 18:09:49 crc kubenswrapper[4724]: E0223 18:09:49.943704 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="extract-content" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943711 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="extract-content" Feb 23 18:09:49 crc kubenswrapper[4724]: E0223 18:09:49.943729 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="extract-utilities" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943737 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="extract-utilities" Feb 23 18:09:49 crc kubenswrapper[4724]: E0223 18:09:49.943748 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5fa243-d790-4006-9c4c-7a1bf93a56b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943755 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5fa243-d790-4006-9c4c-7a1bf93a56b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943961 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5fa243-d790-4006-9c4c-7a1bf93a56b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.943983 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="42810e8d-cfd8-4629-9cbb-3ebc85683364" containerName="registry-server" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.944860 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.946662 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.947005 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.947617 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.947698 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.948003 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.948164 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.958054 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:09:49 crc kubenswrapper[4724]: I0223 18:09:49.962353 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c"] Feb 23 18:09:49 crc kubenswrapper[4724]: E0223 18:09:49.998354 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5fa243_d790_4006_9c4c_7a1bf93a56b4.slice/crio-a4937d6900a0b5118203e8684be97c3061328a01a4f0627faac2160a7a95e924\": RecentStats: unable to find data in memory cache]" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.042589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.042907 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.042932 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043004 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043089 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043157 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043203 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043236 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043284 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8v9j\" (UniqueName: \"kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.043348 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146545 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146647 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146720 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146773 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146828 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.146918 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8v9j\" (UniqueName: \"kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.147014 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.147057 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.147114 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.147185 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.149272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.152294 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.152434 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.152677 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.153067 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.153421 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.153731 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.153609 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.159984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.160226 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.169185 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8v9j\" (UniqueName: \"kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t898c\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.268199 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.840046 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c"] Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.840998 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:09:50 crc kubenswrapper[4724]: I0223 18:09:50.851775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" event={"ID":"28de6808-9434-463a-9b7f-cd4236c51c29","Type":"ContainerStarted","Data":"e8f0c94f16e4dab55ef550ab0215bb3bfa32cc3d0e7cc6e5d61996f50f54222d"} Feb 23 18:09:51 crc kubenswrapper[4724]: I0223 18:09:51.861105 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" event={"ID":"28de6808-9434-463a-9b7f-cd4236c51c29","Type":"ContainerStarted","Data":"58054b4d0dde29ad78c2ec8a4b9e7e86bf9a1bcb663314b29f1eebee9052895c"} Feb 23 18:09:51 crc kubenswrapper[4724]: I0223 18:09:51.885059 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" podStartSLOduration=2.436937555 podStartE2EDuration="2.885039247s" podCreationTimestamp="2026-02-23 18:09:49 +0000 UTC" firstStartedPulling="2026-02-23 18:09:50.840722351 +0000 UTC m=+2346.656921951" lastFinishedPulling="2026-02-23 18:09:51.288824043 +0000 UTC m=+2347.105023643" observedRunningTime="2026-02-23 18:09:51.879805498 +0000 UTC m=+2347.696005098" watchObservedRunningTime="2026-02-23 18:09:51.885039247 +0000 UTC m=+2347.701238857" Feb 23 18:10:49 crc kubenswrapper[4724]: I0223 18:10:49.952310 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:10:49 crc kubenswrapper[4724]: I0223 18:10:49.957479 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:49 crc kubenswrapper[4724]: I0223 18:10:49.968135 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.134041 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.134216 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfv2\" (UniqueName: \"kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.134313 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.236566 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfv2\" (UniqueName: \"kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.236654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.236724 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.237212 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.237871 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.260872 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9sfv2\" (UniqueName: \"kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2\") pod \"community-operators-brj4v\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.276902 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:10:50 crc kubenswrapper[4724]: I0223 18:10:50.830098 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:10:51 crc kubenswrapper[4724]: I0223 18:10:51.452577 4724 generic.go:334] "Generic (PLEG): container finished" podID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerID="d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a" exitCode=0 Feb 23 18:10:51 crc kubenswrapper[4724]: I0223 18:10:51.452641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerDied","Data":"d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a"} Feb 23 18:10:51 crc kubenswrapper[4724]: I0223 18:10:51.452925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerStarted","Data":"213062d5041749b8b13c6b1e5508d91c33e85e9d5ac3c47d585a6d8eaf0bd543"} Feb 23 18:10:52 crc kubenswrapper[4724]: I0223 18:10:52.462311 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerStarted","Data":"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95"} Feb 23 18:10:53 crc kubenswrapper[4724]: I0223 18:10:53.472879 4724 generic.go:334] "Generic (PLEG): container finished" podID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerID="78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95" exitCode=0 Feb 23 18:10:53 crc kubenswrapper[4724]: I0223 18:10:53.472923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerDied","Data":"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95"} Feb 23 18:10:54 crc kubenswrapper[4724]: I0223 18:10:54.488681 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerStarted","Data":"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2"} Feb 23 18:10:54 crc kubenswrapper[4724]: I0223 18:10:54.517182 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brj4v" podStartSLOduration=2.87675623 podStartE2EDuration="5.517159592s" podCreationTimestamp="2026-02-23 18:10:49 +0000 UTC" firstStartedPulling="2026-02-23 18:10:51.454804079 +0000 UTC m=+2407.271003669" lastFinishedPulling="2026-02-23 18:10:54.095207401 +0000 UTC m=+2409.911407031" observedRunningTime="2026-02-23 18:10:54.510021832 +0000 UTC m=+2410.326221422" watchObservedRunningTime="2026-02-23 18:10:54.517159592 +0000 UTC m=+2410.333359192" Feb 23 18:10:57 crc kubenswrapper[4724]: I0223 18:10:57.751784 4724 patch_prober.go:28] interesting 
pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:10:57 crc kubenswrapper[4724]: I0223 18:10:57.752439 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:11:00 crc kubenswrapper[4724]: I0223 18:11:00.277471 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:00 crc kubenswrapper[4724]: I0223 18:11:00.277810 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:00 crc kubenswrapper[4724]: I0223 18:11:00.322297 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:00 crc kubenswrapper[4724]: I0223 18:11:00.579682 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:00 crc kubenswrapper[4724]: I0223 18:11:00.626760 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:11:02 crc kubenswrapper[4724]: I0223 18:11:02.552699 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brj4v" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="registry-server" containerID="cri-o://d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2" gracePeriod=2 Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.126589 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.312355 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sfv2\" (UniqueName: \"kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2\") pod \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.312718 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content\") pod \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.312775 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities\") pod \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\" (UID: \"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399\") " Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.313798 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities" (OuterVolumeSpecName: "utilities") pod "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" (UID: "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.329614 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2" (OuterVolumeSpecName: "kube-api-access-9sfv2") pod "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" (UID: "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399"). InnerVolumeSpecName "kube-api-access-9sfv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.363343 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" (UID: "6aa1a66f-e4ce-45f1-a6a7-b610cb79c399"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.416315 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sfv2\" (UniqueName: \"kubernetes.io/projected/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-kube-api-access-9sfv2\") on node \"crc\" DevicePath \"\"" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.416363 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.416381 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.731479 4724 generic.go:334] "Generic (PLEG): container finished" podID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerID="d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2" exitCode=0 Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.731539 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerDied","Data":"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2"} Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.731578 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brj4v" event={"ID":"6aa1a66f-e4ce-45f1-a6a7-b610cb79c399","Type":"ContainerDied","Data":"213062d5041749b8b13c6b1e5508d91c33e85e9d5ac3c47d585a6d8eaf0bd543"} Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.731601 4724 scope.go:117] "RemoveContainer" containerID="d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.731791 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brj4v" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.761643 4724 scope.go:117] "RemoveContainer" containerID="78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.771243 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.781241 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brj4v"] Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.785345 4724 scope.go:117] "RemoveContainer" containerID="d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.827404 4724 scope.go:117] "RemoveContainer" containerID="d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2" Feb 23 18:11:03 crc kubenswrapper[4724]: E0223 18:11:03.828543 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2\": container with ID starting with d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2 not found: ID does not exist" containerID="d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.828598 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2"} err="failed to get container status \"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2\": rpc error: code = NotFound desc = could not find container \"d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2\": container with ID starting with d539d7fa03bfe0f0640fb9dce6c38fa024f930359b903eb37d668201b8008cc2 not found: ID does not exist" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.828626 4724 scope.go:117] "RemoveContainer" containerID="78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95" Feb 23 18:11:03 crc kubenswrapper[4724]: E0223 18:11:03.828919 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95\": container with ID starting with 78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95 not found: ID does not exist" containerID="78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.828945 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95"} err="failed to get container status \"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95\": rpc error: code = NotFound desc = could not find container \"78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95\": container with ID starting with 78d792127717a60d3e8a5038878a43acad90acd373ff861992653bb217514c95 not found: ID does not exist" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.828967 4724 scope.go:117] "RemoveContainer" containerID="d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a" Feb 23 18:11:03 crc kubenswrapper[4724]: E0223 18:11:03.829207 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a\": container with ID starting with d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a not found: ID does not exist" containerID="d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a" Feb 23 18:11:03 crc kubenswrapper[4724]: I0223 18:11:03.829231 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a"} err="failed to get container status \"d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a\": rpc error: code = NotFound desc = could not find container \"d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a\": container with ID starting with d22811a50864e92b1feff0ae3fe0d77e78a336d5b78cd13aacf721aed6229a8a not found: ID does not exist" Feb 23 18:11:04 crc kubenswrapper[4724]: I0223 18:11:04.982181 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" path="/var/lib/kubelet/pods/6aa1a66f-e4ce-45f1-a6a7-b610cb79c399/volumes" Feb 23 18:11:27 crc kubenswrapper[4724]: I0223 18:11:27.752216 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:11:27 crc kubenswrapper[4724]: I0223 18:11:27.752725 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.356892 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:11:48 crc kubenswrapper[4724]: E0223 18:11:48.357959 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="extract-utilities" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.357979 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="extract-utilities" Feb 23 18:11:48 crc kubenswrapper[4724]: E0223 18:11:48.357996 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="registry-server" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.358003 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="registry-server" Feb 23 18:11:48 crc kubenswrapper[4724]: E0223 18:11:48.358018 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="extract-content" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.358026 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="extract-content" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.358283 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa1a66f-e4ce-45f1-a6a7-b610cb79c399" containerName="registry-server" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 
18:11:48.360009 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.366233 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.413237 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l77\" (UniqueName: \"kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.413341 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.413375 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.515401 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.515472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.515560 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4l77\" (UniqueName: \"kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.516021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.516157 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc 
kubenswrapper[4724]: I0223 18:11:48.541646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4l77\" (UniqueName: \"kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77\") pod \"certified-operators-m9mlf\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:48 crc kubenswrapper[4724]: I0223 18:11:48.682139 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:49 crc kubenswrapper[4724]: I0223 18:11:49.157063 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:11:50 crc kubenswrapper[4724]: I0223 18:11:50.135472 4724 generic.go:334] "Generic (PLEG): container finished" podID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerID="31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de" exitCode=0 Feb 23 18:11:50 crc kubenswrapper[4724]: I0223 18:11:50.135558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerDied","Data":"31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de"} Feb 23 18:11:50 crc kubenswrapper[4724]: I0223 18:11:50.135770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerStarted","Data":"89cf407655fcc17e4a9512eb55579a25004ab1d1784f2cb6871467d4f67d072a"} Feb 23 18:11:51 crc kubenswrapper[4724]: I0223 18:11:51.145142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerStarted","Data":"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a"} Feb 23 18:11:52 crc kubenswrapper[4724]: I0223 18:11:52.158461 4724 generic.go:334] "Generic (PLEG): container finished" podID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerID="eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a" exitCode=0 Feb 23 18:11:52 crc kubenswrapper[4724]: I0223 18:11:52.158509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerDied","Data":"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a"} Feb 23 18:11:53 crc kubenswrapper[4724]: I0223 18:11:53.168847 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerStarted","Data":"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4"} Feb 23 18:11:53 crc kubenswrapper[4724]: I0223 18:11:53.192588 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m9mlf" podStartSLOduration=2.8011341659999998 podStartE2EDuration="5.192570411s" podCreationTimestamp="2026-02-23 18:11:48 +0000 UTC" firstStartedPulling="2026-02-23 18:11:50.137308734 +0000 UTC m=+2465.953508334" lastFinishedPulling="2026-02-23 18:11:52.528744979 +0000 UTC m=+2468.344944579" observedRunningTime="2026-02-23 18:11:53.184782373 +0000 UTC m=+2469.000982003" watchObservedRunningTime="2026-02-23 18:11:53.192570411 +0000 UTC m=+2469.008770011" Feb 23 18:11:57 crc 
kubenswrapper[4724]: I0223 18:11:57.752317 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:11:57 crc kubenswrapper[4724]: I0223 18:11:57.752879 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:11:57 crc kubenswrapper[4724]: I0223 18:11:57.752924 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:11:57 crc kubenswrapper[4724]: I0223 18:11:57.753642 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:11:57 crc kubenswrapper[4724]: I0223 18:11:57.753697 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" gracePeriod=600 Feb 23 18:11:57 crc kubenswrapper[4724]: E0223 18:11:57.874830 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.214030 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" exitCode=0 Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.214367 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"} Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.214424 4724 scope.go:117] "RemoveContainer" containerID="d8e7d3776c6fbb48ad76fa72eb5bbe6d210efd516c146bb3a54891b4dbd9d170" Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.215372 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:11:58 crc kubenswrapper[4724]: E0223 18:11:58.215695 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.682535 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.682919 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:58 crc kubenswrapper[4724]: I0223 18:11:58.728751 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:59 crc kubenswrapper[4724]: I0223 18:11:59.273774 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:11:59 crc kubenswrapper[4724]: I0223 18:11:59.344078 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.240957 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m9mlf" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="registry-server" containerID="cri-o://e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4" gracePeriod=2 Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.750316 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.877586 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities\") pod \"3207dce2-0b8b-495a-8ec3-81187c0e7002\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.877708 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4l77\" (UniqueName: \"kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77\") pod \"3207dce2-0b8b-495a-8ec3-81187c0e7002\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.877729 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content\") pod \"3207dce2-0b8b-495a-8ec3-81187c0e7002\" (UID: \"3207dce2-0b8b-495a-8ec3-81187c0e7002\") " Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.878908 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities" (OuterVolumeSpecName: "utilities") pod "3207dce2-0b8b-495a-8ec3-81187c0e7002" (UID: "3207dce2-0b8b-495a-8ec3-81187c0e7002"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.887030 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77" (OuterVolumeSpecName: "kube-api-access-k4l77") pod "3207dce2-0b8b-495a-8ec3-81187c0e7002" (UID: "3207dce2-0b8b-495a-8ec3-81187c0e7002"). InnerVolumeSpecName "kube-api-access-k4l77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.980665 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:01 crc kubenswrapper[4724]: I0223 18:12:01.980714 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4l77\" (UniqueName: \"kubernetes.io/projected/3207dce2-0b8b-495a-8ec3-81187c0e7002-kube-api-access-k4l77\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.046499 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3207dce2-0b8b-495a-8ec3-81187c0e7002" (UID: "3207dce2-0b8b-495a-8ec3-81187c0e7002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.083107 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207dce2-0b8b-495a-8ec3-81187c0e7002-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.253535 4724 generic.go:334] "Generic (PLEG): container finished" podID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerID="e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4" exitCode=0 Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.253604 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m9mlf" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.253611 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerDied","Data":"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4"} Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.253925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9mlf" event={"ID":"3207dce2-0b8b-495a-8ec3-81187c0e7002","Type":"ContainerDied","Data":"89cf407655fcc17e4a9512eb55579a25004ab1d1784f2cb6871467d4f67d072a"} Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.253955 4724 scope.go:117] "RemoveContainer" containerID="e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.294070 4724 scope.go:117] "RemoveContainer" containerID="eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.301678 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.309909 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m9mlf"] Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.316650 4724 scope.go:117] "RemoveContainer" containerID="31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.372283 4724 scope.go:117] "RemoveContainer" containerID="e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4" Feb 23 18:12:02 crc kubenswrapper[4724]: E0223 18:12:02.372920 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4\": container with ID starting with e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4 not found: ID does not exist" containerID="e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.373285 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4"} err="failed to get container status \"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4\": rpc error: code = NotFound desc = could not find container \"e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4\": container with ID starting with e34401b264830715db52918de813ed0f7e6f285dbd140c1b12a194c00bfd23e4 not found: ID does not exist" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.373325 4724 scope.go:117] "RemoveContainer" containerID="eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a" Feb 23 18:12:02 crc kubenswrapper[4724]: E0223 18:12:02.373730 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a\": container with ID starting with eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a not found: ID does not exist" containerID="eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.373794 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a"} err="failed to get container status \"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a\": rpc error: code = NotFound desc = could not find container \"eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a\": container with ID starting with eceab65f4facae8e7c7ae592f7aa885634a02a7acc020aff25f882e75ec4a81a not found: ID does not exist" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.373831 4724 scope.go:117] "RemoveContainer" containerID="31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de" Feb 23 18:12:02 crc kubenswrapper[4724]: E0223 18:12:02.374456 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de\": container with ID starting with 31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de not found: ID does not exist" containerID="31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.374498 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de"} err="failed to get container status \"31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de\": rpc error: code = NotFound desc = could not find container \"31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de\": container with ID starting with 31022b28337872ef5d4463d6a59d222cb759519021ff0f2a72eeb878d5a3b0de not found: ID does not exist" Feb 23 18:12:02 crc kubenswrapper[4724]: I0223 18:12:02.962297 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" path="/var/lib/kubelet/pods/3207dce2-0b8b-495a-8ec3-81187c0e7002/volumes" Feb 23 18:12:07 crc kubenswrapper[4724]: I0223 18:12:07.301239 4724 generic.go:334] "Generic (PLEG): container finished" podID="28de6808-9434-463a-9b7f-cd4236c51c29" containerID="58054b4d0dde29ad78c2ec8a4b9e7e86bf9a1bcb663314b29f1eebee9052895c" exitCode=0 Feb 23 18:12:07 crc kubenswrapper[4724]: I0223 18:12:07.301344 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" event={"ID":"28de6808-9434-463a-9b7f-cd4236c51c29","Type":"ContainerDied","Data":"58054b4d0dde29ad78c2ec8a4b9e7e86bf9a1bcb663314b29f1eebee9052895c"} Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.752751 4724 util.go:48] "No ready sandbox for pod can be found. 
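The "DeleteContainer returned error ... NotFound" sequence above is benign: the container was already removed, so the runtime's NotFound answer means the delete effectively succeeded. A sketch of that idempotent-delete pattern, using a hypothetical helper rather than kubelet's own code:

```go
// removeContainer treats a gRPC NotFound from the (stand-in) CRI delete RPC
// as success, mirroring how kubelet logs and moves on above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// criDelete stands in for the CRI RemoveContainer RPC.
type criDelete func(containerID string) error

func removeContainer(del criDelete, id string) error {
	if err := del(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Already gone: record it and continue, as the log shows above.
			fmt.Printf("container %q not found; treating delete as success\n", id)
			return nil
		}
		return fmt.Errorf("delete container %q: %w", id, err)
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	_ = removeContainer(gone, "e34401b26483") // prints the note and succeeds
}
```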
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.918501 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.918635 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.918679 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.918731 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8v9j\" (UniqueName: \"kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.918758 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919283 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919336 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919358 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919385 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919483 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.919561 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3\") pod \"28de6808-9434-463a-9b7f-cd4236c51c29\" (UID: \"28de6808-9434-463a-9b7f-cd4236c51c29\") " Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.924168 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j" (OuterVolumeSpecName: "kube-api-access-b8v9j") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "kube-api-access-b8v9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.932230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.950946 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:12:08 crc kubenswrapper[4724]: E0223 18:12:08.951344 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.953513 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.954239 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.954271 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). 
InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.955736 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory" (OuterVolumeSpecName: "inventory") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.959670 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.961667 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.962537 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.967467 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:08 crc kubenswrapper[4724]: I0223 18:12:08.981468 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "28de6808-9434-463a-9b7f-cd4236c51c29" (UID: "28de6808-9434-463a-9b7f-cd4236c51c29"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023027 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023075 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023084 4724 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023096 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023108 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8v9j\" (UniqueName: \"kubernetes.io/projected/28de6808-9434-463a-9b7f-cd4236c51c29-kube-api-access-b8v9j\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023117 4724 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/28de6808-9434-463a-9b7f-cd4236c51c29-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023125 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023133 4724 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023143 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023155 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.023170 4724 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28de6808-9434-463a-9b7f-cd4236c51c29-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.321097 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c" event={"ID":"28de6808-9434-463a-9b7f-cd4236c51c29","Type":"ContainerDied","Data":"e8f0c94f16e4dab55ef550ab0215bb3bfa32cc3d0e7cc6e5d61996f50f54222d"} Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.321177 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t898c"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.438126 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"]
Feb 23 18:12:09 crc kubenswrapper[4724]: E0223 18:12:09.438663 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="registry-server"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.438690 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="registry-server"
Feb 23 18:12:09 crc kubenswrapper[4724]: E0223 18:12:09.438727 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="extract-content"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.438736 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="extract-content"
Feb 23 18:12:09 crc kubenswrapper[4724]: E0223 18:12:09.438757 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="extract-utilities"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.438767 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="extract-utilities"
Feb 23 18:12:09 crc kubenswrapper[4724]: E0223 18:12:09.438787 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28de6808-9434-463a-9b7f-cd4236c51c29" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.438795 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28de6808-9434-463a-9b7f-cd4236c51c29" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.439036 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="28de6808-9434-463a-9b7f-cd4236c51c29" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.439079 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3207dce2-0b8b-495a-8ec3-81187c0e7002" containerName="registry-server"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.440106 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.446242 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t8jff"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.446498 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.446607 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.446572 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.447149 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.447571 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"]
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534459 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534524 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqg8\" (UniqueName: \"kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534573 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534603 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534823 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534864 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.534899 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.637685 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.637952 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.638052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.638073 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.638104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.638209 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.638239 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqg8\" (UniqueName: \"kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.643094 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.645497 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.643921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.644172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.644972 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.643121 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.661361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqg8\" (UniqueName: \"kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:09 crc kubenswrapper[4724]: I0223 18:12:09.770586 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:12:10 crc kubenswrapper[4724]: I0223 18:12:10.343958 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"]
Feb 23 18:12:11 crc kubenswrapper[4724]: I0223 18:12:11.340142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" event={"ID":"3052df73-dea7-4da0-b0b1-f881cff2b747","Type":"ContainerStarted","Data":"ea711dca0b9ebecc3edf0b849a9f1372892b5c6bfb322dd804639cd532607cbc"}
Feb 23 18:12:11 crc kubenswrapper[4724]: I0223 18:12:11.340193 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" event={"ID":"3052df73-dea7-4da0-b0b1-f881cff2b747","Type":"ContainerStarted","Data":"072a5cdb5566c924e007d211df0f06e3a7e9892e39cb20ba074687834da16253"}
Feb 23 18:12:11 crc kubenswrapper[4724]: I0223 18:12:11.363497 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" podStartSLOduration=1.762767312 podStartE2EDuration="2.363472s" podCreationTimestamp="2026-02-23 18:12:09 +0000 UTC" firstStartedPulling="2026-02-23 18:12:10.352194457 +0000 UTC m=+2486.168394057" lastFinishedPulling="2026-02-23 18:12:10.952899145 +0000 UTC m=+2486.769098745" observedRunningTime="2026-02-23 18:12:11.356073452 +0000 UTC m=+2487.172273052" watchObservedRunningTime="2026-02-23 18:12:11.363472 +0000 UTC m=+2487.179671600"
Feb 23 18:12:21 crc kubenswrapper[4724]: I0223 18:12:21.952304 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:12:21 crc kubenswrapper[4724]: E0223 18:12:21.953090 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:12:32 crc kubenswrapper[4724]: I0223 18:12:32.951984 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:12:32 crc kubenswrapper[4724]: E0223 18:12:32.953111 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:12:47 crc kubenswrapper[4724]: I0223 18:12:47.951831 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:12:47 crc kubenswrapper[4724]: E0223 18:12:47.952662 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:13:02 crc kubenswrapper[4724]: I0223 18:13:02.955640 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:13:02 crc kubenswrapper[4724]: E0223 18:13:02.956378 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:13:14 crc kubenswrapper[4724]: I0223 18:13:14.957290 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:13:14 crc kubenswrapper[4724]: E0223 18:13:14.958256 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:13:29 crc kubenswrapper[4724]: I0223 18:13:29.951220 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:13:29 crc kubenswrapper[4724]: E0223 18:13:29.952062 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:13:43 crc kubenswrapper[4724]: I0223 18:13:43.952256 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:13:43 crc kubenswrapper[4724]: E0223 18:13:43.952987 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:13:55 crc kubenswrapper[4724]: I0223 18:13:55.950982 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:13:55 crc kubenswrapper[4724]: E0223 18:13:55.951751 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
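The "back-off 5m0s" in every retry above is the CrashLoopBackOff delay at its cap. A sketch of the schedule: the delay doubles per restart from an initial value until it hits the cap; the 10s initial and 5m cap match kubelet's defaults but should be treated as assumptions of this sketch, not a stable API.

```go
// backoff.go — the exponential restart-delay schedule implied by the
// repeating "back-off 5m0s" errors above.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s ...
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
}
```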
Feb 23 18:13:58 crc kubenswrapper[4724]: I0223 18:13:58.389319 4724 generic.go:334] "Generic (PLEG): container finished" podID="3052df73-dea7-4da0-b0b1-f881cff2b747" containerID="ea711dca0b9ebecc3edf0b849a9f1372892b5c6bfb322dd804639cd532607cbc" exitCode=0
Feb 23 18:13:58 crc kubenswrapper[4724]: I0223 18:13:58.389461 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" event={"ID":"3052df73-dea7-4da0-b0b1-f881cff2b747","Type":"ContainerDied","Data":"ea711dca0b9ebecc3edf0b849a9f1372892b5c6bfb322dd804639cd532607cbc"}
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.794145 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.929247 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.929661 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.929790 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.929896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.930005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.930078 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqg8\" (UniqueName: \"kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.930154 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle\") pod \"3052df73-dea7-4da0-b0b1-f881cff2b747\" (UID: \"3052df73-dea7-4da0-b0b1-f881cff2b747\") "
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.936040 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8" (OuterVolumeSpecName: "kube-api-access-5fqg8") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "kube-api-access-5fqg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.946617 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.960038 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory" (OuterVolumeSpecName: "inventory") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.960652 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.961288 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.964593 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:13:59 crc kubenswrapper[4724]: I0223 18:13:59.969269 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3052df73-dea7-4da0-b0b1-f881cff2b747" (UID: "3052df73-dea7-4da0-b0b1-f881cff2b747"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033344 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033407 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033423 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqg8\" (UniqueName: \"kubernetes.io/projected/3052df73-dea7-4da0-b0b1-f881cff2b747-kube-api-access-5fqg8\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033438 4724 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033449 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033461 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.033472 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3052df73-dea7-4da0-b0b1-f881cff2b747-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.407187 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" event={"ID":"3052df73-dea7-4da0-b0b1-f881cff2b747","Type":"ContainerDied","Data":"072a5cdb5566c924e007d211df0f06e3a7e9892e39cb20ba074687834da16253"}
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.407599 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072a5cdb5566c924e007d211df0f06e3a7e9892e39cb20ba074687834da16253"
Feb 23 18:14:00 crc kubenswrapper[4724]: I0223 18:14:00.407263 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn"
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn" Feb 23 18:14:10 crc kubenswrapper[4724]: I0223 18:14:10.951675 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:14:10 crc kubenswrapper[4724]: E0223 18:14:10.952420 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:14:25 crc kubenswrapper[4724]: I0223 18:14:25.951069 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:14:25 crc kubenswrapper[4724]: E0223 18:14:25.952046 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.950119 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 23 18:14:33 crc kubenswrapper[4724]: E0223 18:14:33.950984 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3052df73-dea7-4da0-b0b1-f881cff2b747" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.950999 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3052df73-dea7-4da0-b0b1-f881cff2b747" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.951186 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3052df73-dea7-4da0-b0b1-f881cff2b747" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.952405 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.954742 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.973306 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995077 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995140 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995167 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-dev\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995188 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-nvme\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-lib-modules\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995264 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995313 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995360 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data-custom\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995408 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ksn\" (UniqueName: \"kubernetes.io/projected/55cae485-5e0f-4fb8-a19a-21f84b246733-kube-api-access-h8ksn\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995430 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-scripts\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995477 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995528 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-run\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:33 crc kubenswrapper[4724]: I0223 18:14:33.995624 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-sys\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.022775 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.024711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.026870 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.060399 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.082076 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.084331 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.086111 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099151 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099185 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099215 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099245 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099333 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: 
\"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099574 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsw7v\" (UniqueName: \"kubernetes.io/projected/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-kube-api-access-qsw7v\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099784 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-sys\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099808 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-run\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099852 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099886 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-sys\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099929 4724 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-run\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099946 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.099973 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100003 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-dev\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100059 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-nvme\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-lib-modules\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100131 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-dev\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100201 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-lib-modules\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100242 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100264 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-etc-nvme\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100268 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100340 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100466 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-run\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100524 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100573 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvn9\" (UniqueName: \"kubernetes.io/projected/fee47e38-5239-488d-a11c-53342802f8b1-kube-api-access-2nvn9\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100693 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " 
pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100801 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100895 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100922 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.100969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101115 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101217 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data-custom\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101246 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101853 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101878 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.101990 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.102021 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.102083 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8ksn\" (UniqueName: \"kubernetes.io/projected/55cae485-5e0f-4fb8-a19a-21f84b246733-kube-api-access-h8ksn\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.102112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-scripts\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.102173 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.103646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.105054 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/55cae485-5e0f-4fb8-a19a-21f84b246733-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.106574 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-scripts\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.107010 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data-custom\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") 
" pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.110564 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.115049 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.118628 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cae485-5e0f-4fb8-a19a-21f84b246733-config-data\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.122142 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8ksn\" (UniqueName: \"kubernetes.io/projected/55cae485-5e0f-4fb8-a19a-21f84b246733-kube-api-access-h8ksn\") pod \"cinder-backup-0\" (UID: \"55cae485-5e0f-4fb8-a19a-21f84b246733\") " pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203093 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203142 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203161 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203180 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203197 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203243 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203256 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203270 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203284 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203345 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203361 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " 
pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203483 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203502 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsw7v\" (UniqueName: \"kubernetes.io/projected/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-kube-api-access-qsw7v\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203567 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203589 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203610 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203634 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203650 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203680 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-run\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203696 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvn9\" (UniqueName: \"kubernetes.io/projected/fee47e38-5239-488d-a11c-53342802f8b1-kube-api-access-2nvn9\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203726 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203800 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.203833 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204474 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204506 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204532 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204512 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204550 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204514 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204675 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204708 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204723 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204871 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.204994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-run\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205173 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205296 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205305 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205343 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.205363 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fee47e38-5239-488d-a11c-53342802f8b1-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.208353 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.209070 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.210117 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.211115 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee47e38-5239-488d-a11c-53342802f8b1-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.211758 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.211899 
4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.211908 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.212094 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.220862 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsw7v\" (UniqueName: \"kubernetes.io/projected/34ef4ee9-8229-4235-bb3c-f5138b1f8d4f-kube-api-access-qsw7v\") pod \"cinder-volume-nfs-2-0\" (UID: \"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f\") " pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.221692 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvn9\" (UniqueName: \"kubernetes.io/projected/fee47e38-5239-488d-a11c-53342802f8b1-kube-api-access-2nvn9\") pod \"cinder-volume-nfs-0\" (UID: \"fee47e38-5239-488d-a11c-53342802f8b1\") " pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.287143 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.352257 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.496121 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.880256 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 18:14:34 crc kubenswrapper[4724]: I0223 18:14:34.903515 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"55cae485-5e0f-4fb8-a19a-21f84b246733","Type":"ContainerStarted","Data":"fbd26e7f5957ce5b631938c240ac45162a67fea9da01f8a453d842f0b257e7d5"} Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.013178 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 23 18:14:35 crc kubenswrapper[4724]: W0223 18:14:35.052859 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ef4ee9_8229_4235_bb3c_f5138b1f8d4f.slice/crio-bfb785732cc14ba8c5463bada7a2509f26ea388c9e4543a5245b01af910148ac WatchSource:0}: Error finding container bfb785732cc14ba8c5463bada7a2509f26ea388c9e4543a5245b01af910148ac: Status 404 returned error can't find the container with id bfb785732cc14ba8c5463bada7a2509f26ea388c9e4543a5245b01af910148ac Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.134245 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.918683 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fee47e38-5239-488d-a11c-53342802f8b1","Type":"ContainerStarted","Data":"1ab4a1713b508c0023a2158382abc2cddd5fdc89e798314a371adc3fcea516ee"} Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.919457 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fee47e38-5239-488d-a11c-53342802f8b1","Type":"ContainerStarted","Data":"d03473f01021babd01a90e0e1b94b29afb76521989cbce6032c3a112bdf50b64"} Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.921691 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"55cae485-5e0f-4fb8-a19a-21f84b246733","Type":"ContainerStarted","Data":"bd5cb473be3637ba3c81c770c7f1f636bd08489e91071d1a91ce5399926a353e"} Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.925057 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f","Type":"ContainerStarted","Data":"8e2d9a8c3c74c0d42f52c1df13e6065368cd2eaf6e1df094411f7ae571513b69"} Feb 23 18:14:35 crc kubenswrapper[4724]: I0223 18:14:35.925090 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f","Type":"ContainerStarted","Data":"bfb785732cc14ba8c5463bada7a2509f26ea388c9e4543a5245b01af910148ac"} Feb 23 18:14:36 crc kubenswrapper[4724]: I0223 18:14:36.939638 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"55cae485-5e0f-4fb8-a19a-21f84b246733","Type":"ContainerStarted","Data":"0b7bfcebba03f5ab0e1ff2e21a8f594d43a7793ae808db6da40f44631e2f842a"} Feb 23 18:14:36 crc kubenswrapper[4724]: I0223 18:14:36.944100 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"34ef4ee9-8229-4235-bb3c-f5138b1f8d4f","Type":"ContainerStarted","Data":"1d39ec499401fb7e9fdbb22fa775a1e4fa388b1f626040576705ae4ad561c356"} Feb 23 18:14:36 crc kubenswrapper[4724]: I0223 
18:14:36.951439 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:14:36 crc kubenswrapper[4724]: E0223 18:14:36.951660 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:14:36 crc kubenswrapper[4724]: I0223 18:14:36.972421 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fee47e38-5239-488d-a11c-53342802f8b1","Type":"ContainerStarted","Data":"7f8985c84e32dd26f8bc6f15b3879a86a868b5bfaf07175303a333e059103bac"}
Feb 23 18:14:36 crc kubenswrapper[4724]: I0223 18:14:36.972929 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.767991679 podStartE2EDuration="3.972907622s" podCreationTimestamp="2026-02-23 18:14:33 +0000 UTC" firstStartedPulling="2026-02-23 18:14:34.886866228 +0000 UTC m=+2630.703065828" lastFinishedPulling="2026-02-23 18:14:35.091782171 +0000 UTC m=+2630.907981771" observedRunningTime="2026-02-23 18:14:36.968955273 +0000 UTC m=+2632.785154883" watchObservedRunningTime="2026-02-23 18:14:36.972907622 +0000 UTC m=+2632.789107222"
Feb 23 18:14:37 crc kubenswrapper[4724]: I0223 18:14:37.001506 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.907226251 podStartE2EDuration="3.00147572s" podCreationTimestamp="2026-02-23 18:14:34 +0000 UTC" firstStartedPulling="2026-02-23 18:14:35.286114248 +0000 UTC m=+2631.102313838" lastFinishedPulling="2026-02-23 18:14:35.380363707 +0000 UTC m=+2631.196563307" observedRunningTime="2026-02-23 18:14:36.995864259 +0000 UTC m=+2632.812063879" watchObservedRunningTime="2026-02-23 18:14:37.00147572 +0000 UTC m=+2632.817675330"
Feb 23 18:14:37 crc kubenswrapper[4724]: I0223 18:14:37.042080 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.805931943 podStartE2EDuration="4.042056621s" podCreationTimestamp="2026-02-23 18:14:33 +0000 UTC" firstStartedPulling="2026-02-23 18:14:35.054979455 +0000 UTC m=+2630.871179055" lastFinishedPulling="2026-02-23 18:14:35.291104133 +0000 UTC m=+2631.107303733" observedRunningTime="2026-02-23 18:14:37.015574045 +0000 UTC m=+2632.831773645" watchObservedRunningTime="2026-02-23 18:14:37.042056621 +0000 UTC m=+2632.858256221"
Feb 23 18:14:39 crc kubenswrapper[4724]: I0223 18:14:39.287961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Feb 23 18:14:39 crc kubenswrapper[4724]: I0223 18:14:39.353318 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0"
Feb 23 18:14:39 crc kubenswrapper[4724]: I0223 18:14:39.497708 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0"
Feb 23 18:14:44 crc kubenswrapper[4724]: I0223 18:14:44.488882 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Feb 23 18:14:44 crc kubenswrapper[4724]: I0223 18:14:44.598084 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0"
Feb 23 18:14:44 crc kubenswrapper[4724]: I0223 18:14:44.789870 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0"
Feb 23 18:14:48 crc kubenswrapper[4724]: I0223 18:14:48.951596 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:14:48 crc kubenswrapper[4724]: E0223 18:14:48.952458 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.145891 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"]
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.147849 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.150092 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.150849 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.159916 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"]
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.269217 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.269258 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqff4\" (UniqueName: \"kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.269318 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.371270 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.371452 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.371475 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqff4\" (UniqueName: \"kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.372366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.382524 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.387836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqff4\" (UniqueName: \"kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4\") pod \"collect-profiles-29531175-q6kgt\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.475964 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:00 crc kubenswrapper[4724]: I0223 18:15:00.925064 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"]
Feb 23 18:15:01 crc kubenswrapper[4724]: I0223 18:15:01.207405 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt" event={"ID":"ae3552e8-ee24-41b0-a477-81536c660b7f","Type":"ContainerStarted","Data":"4c2b4dd3f6b984562adb37fd94b97ddcafbc4bf12890f3d84fd702b0320a637d"}
Feb 23 18:15:01 crc kubenswrapper[4724]: I0223 18:15:01.207731 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt" event={"ID":"ae3552e8-ee24-41b0-a477-81536c660b7f","Type":"ContainerStarted","Data":"79e20350e239e5920720fe3eec29eac267c1d9c75e4fab2cda59b2e3fd9379d3"}
Feb 23 18:15:01 crc kubenswrapper[4724]: I0223 18:15:01.227351 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt" podStartSLOduration=1.227334551 podStartE2EDuration="1.227334551s" podCreationTimestamp="2026-02-23 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:15:01.22488578 +0000 UTC m=+2657.041085390" watchObservedRunningTime="2026-02-23 18:15:01.227334551 +0000 UTC m=+2657.043534151"
Feb 23 18:15:02 crc kubenswrapper[4724]: I0223 18:15:02.217712 4724 generic.go:334] "Generic (PLEG): container finished" podID="ae3552e8-ee24-41b0-a477-81536c660b7f" containerID="4c2b4dd3f6b984562adb37fd94b97ddcafbc4bf12890f3d84fd702b0320a637d" exitCode=0
Feb 23 18:15:02 crc kubenswrapper[4724]: I0223 18:15:02.217789 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt" event={"ID":"ae3552e8-ee24-41b0-a477-81536c660b7f","Type":"ContainerDied","Data":"4c2b4dd3f6b984562adb37fd94b97ddcafbc4bf12890f3d84fd702b0320a637d"}
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.556418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.747924 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume\") pod \"ae3552e8-ee24-41b0-a477-81536c660b7f\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") "
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.748051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume\") pod \"ae3552e8-ee24-41b0-a477-81536c660b7f\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") "
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.748072 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqff4\" (UniqueName: \"kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4\") pod \"ae3552e8-ee24-41b0-a477-81536c660b7f\" (UID: \"ae3552e8-ee24-41b0-a477-81536c660b7f\") "
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.748857 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae3552e8-ee24-41b0-a477-81536c660b7f" (UID: "ae3552e8-ee24-41b0-a477-81536c660b7f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.754062 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4" (OuterVolumeSpecName: "kube-api-access-tqff4") pod "ae3552e8-ee24-41b0-a477-81536c660b7f" (UID: "ae3552e8-ee24-41b0-a477-81536c660b7f"). InnerVolumeSpecName "kube-api-access-tqff4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.754599 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae3552e8-ee24-41b0-a477-81536c660b7f" (UID: "ae3552e8-ee24-41b0-a477-81536c660b7f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.852052 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3552e8-ee24-41b0-a477-81536c660b7f-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.852369 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3552e8-ee24-41b0-a477-81536c660b7f-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.852411 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqff4\" (UniqueName: \"kubernetes.io/projected/ae3552e8-ee24-41b0-a477-81536c660b7f-kube-api-access-tqff4\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:03 crc kubenswrapper[4724]: I0223 18:15:03.950879 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:15:03 crc kubenswrapper[4724]: E0223 18:15:03.951297 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.240063 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt" event={"ID":"ae3552e8-ee24-41b0-a477-81536c660b7f","Type":"ContainerDied","Data":"79e20350e239e5920720fe3eec29eac267c1d9c75e4fab2cda59b2e3fd9379d3"}
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.240139 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e20350e239e5920720fe3eec29eac267c1d9c75e4fab2cda59b2e3fd9379d3"
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.240215 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.300923 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf"]
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.308685 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531130-wbshf"]
Feb 23 18:15:04 crc kubenswrapper[4724]: I0223 18:15:04.970534 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67dbde4d-5c0f-45cf-82ae-435b16e17121" path="/var/lib/kubelet/pods/67dbde4d-5c0f-45cf-82ae-435b16e17121/volumes"
Feb 23 18:15:16 crc kubenswrapper[4724]: I0223 18:15:16.951348 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:15:16 crc kubenswrapper[4724]: E0223 18:15:16.952159 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:30 crc kubenswrapper[4724]: I0223 18:15:30.950941 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:15:30 crc kubenswrapper[4724]: E0223 18:15:30.951711 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:41 crc kubenswrapper[4724]: I0223 18:15:41.951683 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:15:41 crc kubenswrapper[4724]: E0223 18:15:41.952583 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.424331 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.424915 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="prometheus" containerID="cri-o://81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79" gracePeriod=600
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.425379 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="thanos-sidecar" containerID="cri-o://e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218" gracePeriod=600
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.425448 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="config-reloader" containerID="cri-o://03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd" gracePeriod=600
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.548024 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.140:9090/-/ready\": dial tcp 10.217.0.140:9090: connect: connection refused"
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.638701 4724 generic.go:334] "Generic (PLEG): container finished" podID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerID="e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218" exitCode=0
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.638738 4724 generic.go:334] "Generic (PLEG): container finished" podID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerID="81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79" exitCode=0
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.638761 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerDied","Data":"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"}
Feb 23 18:15:44 crc kubenswrapper[4724]: I0223 18:15:44.638792 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerDied","Data":"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"}
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.443505 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603188 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603528 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603593 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603620 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrfpn\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603678 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603844 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603869 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603889 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603905 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.603941 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.604016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.604068 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.604088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0\") pod \"28e2f02b-2d94-4130-8d0a-3443aed25fba\" (UID: \"28e2f02b-2d94-4130-8d0a-3443aed25fba\") "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.604185 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.604679 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.605068 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.605973 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.611199 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config" (OuterVolumeSpecName: "config") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.611313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.611370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out" (OuterVolumeSpecName: "config-out") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.611952 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.614557 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.615594 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn" (OuterVolumeSpecName: "kube-api-access-zrfpn") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "kube-api-access-zrfpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.615934 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.623965 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.638169 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.657790 4724 generic.go:334] "Generic (PLEG): container finished" podID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerID="03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd" exitCode=0
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.657832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerDied","Data":"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"}
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.657875 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.657901 4724 scope.go:117] "RemoveContainer" containerID="e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.657885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28e2f02b-2d94-4130-8d0a-3443aed25fba","Type":"ContainerDied","Data":"d137317e4ebddeec9b1e386a278a5512d36458acd640d2e8fe0e7e7a7470bdf1"}
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.685726 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config" (OuterVolumeSpecName: "web-config") pod "28e2f02b-2d94-4130-8d0a-3443aed25fba" (UID: "28e2f02b-2d94-4130-8d0a-3443aed25fba"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706450 4724 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706480 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706490 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706505 4724 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706518 4724 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706530 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrfpn\" (UniqueName: \"kubernetes.io/projected/28e2f02b-2d94-4130-8d0a-3443aed25fba-kube-api-access-zrfpn\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706538 4724 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706576 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") on node \"crc\" "
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706589 4724 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706599 4724 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28e2f02b-2d94-4130-8d0a-3443aed25fba-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706610 4724 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/28e2f02b-2d94-4130-8d0a-3443aed25fba-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.706619 4724 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28e2f02b-2d94-4130-8d0a-3443aed25fba-config-out\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.737513 4724 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.737689 4724 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11") on node "crc"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.793507 4724 scope.go:117] "RemoveContainer" containerID="03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.808811 4724 reconciler_common.go:293] "Volume detached for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") on node \"crc\" DevicePath \"\""
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.817977 4724 scope.go:117] "RemoveContainer" containerID="81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.846192 4724 scope.go:117] "RemoveContainer" containerID="aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.868704 4724 scope.go:117] "RemoveContainer" containerID="e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"
Feb 23 18:15:45 crc kubenswrapper[4724]: E0223 18:15:45.872122 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218\": container with ID starting with e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218 not found: ID does not exist" containerID="e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.872170 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218"} err="failed to get container status \"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218\": rpc error: code = NotFound desc = could not find container \"e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218\": container with ID starting with e526e1201396c66d922fdc87d5ce0caef50f20aa33323edd61b90e687b33d218 not found: ID does not exist"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.872208 4724 scope.go:117] "RemoveContainer" containerID="03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"
Feb 23 18:15:45 crc kubenswrapper[4724]: E0223 18:15:45.874040 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd\": container with ID starting with 03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd not found: ID does not exist" containerID="03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.874072 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd"} err="failed to get container status \"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd\": rpc error: code = NotFound desc = could not find container \"03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd\": container with ID starting with 03541e868c1cee934f03d37f17af74660aa55cf1c9141aeb72e6120248c82cfd not found: ID does not exist"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.874090 4724 scope.go:117] "RemoveContainer" containerID="81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"
Feb 23 18:15:45 crc kubenswrapper[4724]: E0223 18:15:45.874559 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79\": container with ID starting with 81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79 not found: ID does not exist" containerID="81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.874595 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79"} err="failed to get container status \"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79\": rpc error: code = NotFound desc = could not find container \"81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79\": container with ID starting with 81a144562248cdd52d45389349b259977df68466edbb0098ffbce3f964780d79 not found: ID does not exist"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.874612 4724 scope.go:117] "RemoveContainer" containerID="aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"
Feb 23 18:15:45 crc kubenswrapper[4724]: E0223 18:15:45.875027 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f\": container with ID starting with aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f not found: ID does not exist" containerID="aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.875063 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f"} err="failed to get container status \"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f\": rpc error: code = NotFound desc = could not find container \"aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f\": container with ID starting with aafa8a27e8a0b8a6e76b06fe9f2883e7ddd4fa58de5b466c8150296f49b8c03f not found: ID does not exist"
Feb 23 18:15:45 crc kubenswrapper[4724]: I0223 18:15:45.991870 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.001105 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.028077 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:46 crc kubenswrapper[4724]: E0223 18:15:46.028981 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="prometheus"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029008 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="prometheus"
Feb 23 18:15:46 crc kubenswrapper[4724]: E0223 18:15:46.029028 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="config-reloader"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029034 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="config-reloader"
Feb 23 18:15:46 crc kubenswrapper[4724]: E0223 18:15:46.029059 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3552e8-ee24-41b0-a477-81536c660b7f" containerName="collect-profiles"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029066 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3552e8-ee24-41b0-a477-81536c660b7f" containerName="collect-profiles"
Feb 23 18:15:46 crc kubenswrapper[4724]: E0223 18:15:46.029089 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="init-config-reloader"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029097 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="init-config-reloader"
Feb 23 18:15:46 crc kubenswrapper[4724]: E0223 18:15:46.029121 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="thanos-sidecar"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029127 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="thanos-sidecar"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029694 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="prometheus"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029738 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3552e8-ee24-41b0-a477-81536c660b7f" containerName="collect-profiles"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029766 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="thanos-sidecar"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.029808 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" containerName="config-reloader"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.047193 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.049414 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.051457 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.051645 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.051756 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.052032 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8mdd8"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.056192 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.056349 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.056503 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.059974 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216562 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216639 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216736 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216806 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphqc\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-kube-api-access-qphqc\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216830 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216846 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216886 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216913 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.216939 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.217013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.217044 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.318906 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.318975 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319004 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319058 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319093 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphqc\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-kube-api-access-qphqc\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319120 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319138 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319164 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319194 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319221 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.319327 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.320193 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.320287 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.320705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a8cb62eb-328b-4857-92b7-2ec45d3b7714-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.323637 4724 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.323685 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47f183732fd6cce9e8579bb5bdfe275794daae311819ba60fd57e3b1b945523c/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.323642 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.327871 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.328590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.328644 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.329742 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.329845 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.337076 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.337996 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8cb62eb-328b-4857-92b7-2ec45d3b7714-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.346851 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphqc\" (UniqueName: \"kubernetes.io/projected/a8cb62eb-328b-4857-92b7-2ec45d3b7714-kube-api-access-qphqc\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.366339 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81a74b65-f0fc-4662-b1bf-6ba433e2cb11\") pod \"prometheus-metric-storage-0\" (UID: \"a8cb62eb-328b-4857-92b7-2ec45d3b7714\") " pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.372211 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.813886 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 23 18:15:46 crc kubenswrapper[4724]: I0223 18:15:46.971710 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e2f02b-2d94-4130-8d0a-3443aed25fba" path="/var/lib/kubelet/pods/28e2f02b-2d94-4130-8d0a-3443aed25fba/volumes"
Feb 23 18:15:47 crc kubenswrapper[4724]: I0223 18:15:47.675118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerStarted","Data":"1f46d1f1c4a94312e56389fc0dfeb0c08630f0955a68572fc66cee2912ebb480"}
Feb 23 18:15:50 crc kubenswrapper[4724]: I0223 18:15:50.505480 4724 scope.go:117] "RemoveContainer" containerID="ee5f8314678a01afc418a25852abecc282f30eba6f14ba505c8be9808761db1e"
Feb 23 18:15:50 crc kubenswrapper[4724]: I0223 18:15:50.705259 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerStarted","Data":"7a72b22b94bedae2ae3ddaa33f5ebeac0a025354c85ef0cbd54047c8dac443f5"}
Feb 23 18:15:56 crc kubenswrapper[4724]: I0223 18:15:56.952452 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:15:56 crc kubenswrapper[4724]: E0223 18:15:56.953230 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:15:58 crc kubenswrapper[4724]: I0223 18:15:58.788671 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8cb62eb-328b-4857-92b7-2ec45d3b7714" containerID="7a72b22b94bedae2ae3ddaa33f5ebeac0a025354c85ef0cbd54047c8dac443f5" exitCode=0
Feb 23 18:15:58 crc kubenswrapper[4724]: I0223 18:15:58.788726 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerDied","Data":"7a72b22b94bedae2ae3ddaa33f5ebeac0a025354c85ef0cbd54047c8dac443f5"}
Feb 23 18:15:59 crc kubenswrapper[4724]: I0223 18:15:59.799798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerStarted","Data":"84ed571fd6dcb3e2c1999572f43264cf9f98553f9682674033be4aa533143e6c"}
Feb 23 18:16:02 crc kubenswrapper[4724]: I0223 18:16:02.826768 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerStarted","Data":"d30f58a8688e57e4b2d7811b42499aa9cc14c506a5e5197dac17f8b1d8f964e2"}
Feb 23 18:16:02 crc kubenswrapper[4724]: I0223 18:16:02.827460 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a8cb62eb-328b-4857-92b7-2ec45d3b7714","Type":"ContainerStarted","Data":"7d6b1a68f54669ec58fa1077be98dc5f3316fbc0f07e16d2ac5e7dcae9227d01"}
Feb 23 18:16:06 crc kubenswrapper[4724]: I0223 18:16:06.374022 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:16:10 crc kubenswrapper[4724]: I0223 18:16:10.951747 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6"
Feb 23 18:16:10 crc kubenswrapper[4724]: E0223 18:16:10.952526 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:16:16 crc kubenswrapper[4724]: I0223 18:16:16.373502 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:16:16 crc kubenswrapper[4724]: I0223 18:16:16.380248 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:16:16 crc kubenswrapper[4724]: I0223 18:16:16.408739 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=31.408721478 podStartE2EDuration="31.408721478s" podCreationTimestamp="2026-02-23 18:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:16:02.858914399 +0000 UTC m=+2718.675114009" watchObservedRunningTime="2026-02-23 18:16:16.408721478 +0000 UTC m=+2732.224921098"
Feb 23 18:16:16 crc kubenswrapper[4724]: I0223 18:16:16.962738 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.652190 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.657146 4724 util.go:30] "No sandbox
for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.660087 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7tgsq" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.660331 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.661489 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.662566 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.686725 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758097 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758200 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758237 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktn6\" (UniqueName: \"kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758268 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758283 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758300 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758339 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.758741 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.759006 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.860784 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.860841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.860928 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.860976 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jktn6\" (UniqueName: \"kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861021 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861042 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861060 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config\") 
pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861188 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.861983 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.862456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.862882 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.862990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.863163 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.868287 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.872747 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 
18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.876986 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.880006 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jktn6\" (UniqueName: \"kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.904715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") " pod="openstack/tempest-tests-tempest" Feb 23 18:16:21 crc kubenswrapper[4724]: I0223 18:16:21.985598 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 18:16:22 crc kubenswrapper[4724]: I0223 18:16:22.451904 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 18:16:22 crc kubenswrapper[4724]: I0223 18:16:22.455182 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:16:23 crc kubenswrapper[4724]: I0223 18:16:23.004848 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0d826425-e3f8-42d4-823f-2f8db766ad9a","Type":"ContainerStarted","Data":"e3b8deeb44fb45fa04a78c6d874267283c0cfb2f9a290bfe341f9b8fc5ebc476"} Feb 23 18:16:24 crc kubenswrapper[4724]: I0223 18:16:24.958655 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:16:24 crc kubenswrapper[4724]: E0223 18:16:24.959172 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:16:34 crc kubenswrapper[4724]: I0223 18:16:34.134528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0d826425-e3f8-42d4-823f-2f8db766ad9a","Type":"ContainerStarted","Data":"4d449e7d2897358f9b45f5be03e332abda0f08a7f8b60e2956d496e45370ed34"} Feb 23 18:16:34 crc kubenswrapper[4724]: I0223 18:16:34.159475 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.677186214 podStartE2EDuration="14.159454907s" podCreationTimestamp="2026-02-23 18:16:20 +0000 UTC" firstStartedPulling="2026-02-23 18:16:22.454988194 +0000 UTC m=+2738.271187794" lastFinishedPulling="2026-02-23 18:16:32.937256887 +0000 UTC m=+2748.753456487" observedRunningTime="2026-02-23 18:16:34.156279265 +0000 UTC m=+2749.972478865" watchObservedRunningTime="2026-02-23 18:16:34.159454907 +0000 UTC m=+2749.975654507" Feb 23 18:16:35 crc kubenswrapper[4724]: I0223 18:16:35.951116 4724 scope.go:117] 
"RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:16:35 crc kubenswrapper[4724]: E0223 18:16:35.951604 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:16:46 crc kubenswrapper[4724]: I0223 18:16:46.952473 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:16:46 crc kubenswrapper[4724]: E0223 18:16:46.954213 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:16:57 crc kubenswrapper[4724]: I0223 18:16:57.951601 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:16:58 crc kubenswrapper[4724]: I0223 18:16:58.387061 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf"} Feb 23 18:17:22 crc kubenswrapper[4724]: I0223 18:17:22.940238 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:22 crc kubenswrapper[4724]: I0223 18:17:22.942970 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:22 crc kubenswrapper[4724]: I0223 18:17:22.971016 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.119936 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.120135 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzmk\" (UniqueName: \"kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.120173 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.222383 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzmk\" (UniqueName: \"kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.222476 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.222573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.223096 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.223171 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.252301 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bzzmk\" (UniqueName: \"kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk\") pod \"redhat-marketplace-dbb97\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.270408 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:23 crc kubenswrapper[4724]: I0223 18:17:23.790153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:24 crc kubenswrapper[4724]: I0223 18:17:24.653634 4724 generic.go:334] "Generic (PLEG): container finished" podID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerID="cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05" exitCode=0 Feb 23 18:17:24 crc kubenswrapper[4724]: I0223 18:17:24.653749 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerDied","Data":"cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05"} Feb 23 18:17:24 crc kubenswrapper[4724]: I0223 18:17:24.653894 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerStarted","Data":"0f4fc0a232fc6a851865e648dfb537d6a2e42d9c64dc57010e031ac2ccb7005d"} Feb 23 18:17:25 crc kubenswrapper[4724]: I0223 18:17:25.663560 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerStarted","Data":"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974"} Feb 23 18:17:27 crc kubenswrapper[4724]: I0223 18:17:27.687906 4724 generic.go:334] "Generic (PLEG): container finished" podID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerID="c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974" exitCode=0 Feb 23 18:17:27 crc kubenswrapper[4724]: I0223 18:17:27.688186 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerDied","Data":"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974"} Feb 23 18:17:28 crc kubenswrapper[4724]: I0223 18:17:28.699632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerStarted","Data":"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f"} Feb 23 18:17:28 crc kubenswrapper[4724]: I0223 18:17:28.726990 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbb97" podStartSLOduration=3.3270899910000002 podStartE2EDuration="6.72696922s" podCreationTimestamp="2026-02-23 18:17:22 +0000 UTC" firstStartedPulling="2026-02-23 18:17:24.655600625 +0000 UTC m=+2800.471800225" lastFinishedPulling="2026-02-23 18:17:28.055479854 +0000 UTC m=+2803.871679454" observedRunningTime="2026-02-23 18:17:28.715054468 +0000 UTC m=+2804.531254068" watchObservedRunningTime="2026-02-23 18:17:28.72696922 +0000 UTC m=+2804.543168810" Feb 23 18:17:33 crc kubenswrapper[4724]: I0223 18:17:33.270781 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:33 crc kubenswrapper[4724]: I0223 18:17:33.272155 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:33 crc kubenswrapper[4724]: I0223 18:17:33.331467 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:33 crc kubenswrapper[4724]: I0223 18:17:33.786523 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:33 crc kubenswrapper[4724]: I0223 18:17:33.829071 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:35 crc kubenswrapper[4724]: I0223 18:17:35.756538 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dbb97" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="registry-server" containerID="cri-o://3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f" gracePeriod=2 Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.260134 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.392556 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzzmk\" (UniqueName: \"kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk\") pod \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.392602 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities\") pod \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.392903 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content\") pod \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\" (UID: \"ebfcc8af-7664-4330-a6e9-0bc8bd208550\") " Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.393623 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities" (OuterVolumeSpecName: "utilities") pod "ebfcc8af-7664-4330-a6e9-0bc8bd208550" (UID: "ebfcc8af-7664-4330-a6e9-0bc8bd208550"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.399239 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk" (OuterVolumeSpecName: "kube-api-access-bzzmk") pod "ebfcc8af-7664-4330-a6e9-0bc8bd208550" (UID: "ebfcc8af-7664-4330-a6e9-0bc8bd208550"). InnerVolumeSpecName "kube-api-access-bzzmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.419222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebfcc8af-7664-4330-a6e9-0bc8bd208550" (UID: "ebfcc8af-7664-4330-a6e9-0bc8bd208550"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.494980 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.495196 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzzmk\" (UniqueName: \"kubernetes.io/projected/ebfcc8af-7664-4330-a6e9-0bc8bd208550-kube-api-access-bzzmk\") on node \"crc\" DevicePath \"\"" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.495292 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebfcc8af-7664-4330-a6e9-0bc8bd208550-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.766763 4724 generic.go:334] "Generic (PLEG): container finished" podID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerID="3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f" exitCode=0 Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.766802 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerDied","Data":"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f"} Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.766827 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbb97" event={"ID":"ebfcc8af-7664-4330-a6e9-0bc8bd208550","Type":"ContainerDied","Data":"0f4fc0a232fc6a851865e648dfb537d6a2e42d9c64dc57010e031ac2ccb7005d"} Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.766845 4724 scope.go:117] "RemoveContainer" containerID="3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.766957 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbb97" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.813324 4724 scope.go:117] "RemoveContainer" containerID="c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.817912 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.828180 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbb97"] Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.844472 4724 scope.go:117] "RemoveContainer" containerID="cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.877820 4724 scope.go:117] "RemoveContainer" containerID="3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f" Feb 23 18:17:36 crc kubenswrapper[4724]: E0223 18:17:36.878200 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f\": container with ID starting with 3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f not found: ID does not exist" containerID="3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.878243 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f"} err="failed to get container status \"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f\": rpc error: code = NotFound desc = could not find container \"3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f\": container with ID starting with 3a163b67d6e2acd29ed524e1e40efb73edcdcdf65a8b501aa27ded9f80581c9f not found: ID does not exist" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.878269 4724 scope.go:117] "RemoveContainer" containerID="c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974" Feb 23 18:17:36 crc kubenswrapper[4724]: E0223 18:17:36.878624 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974\": container with ID starting with c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974 not found: ID does not exist" containerID="c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.878676 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974"} err="failed to get container status \"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974\": rpc error: code = NotFound desc = could not find container \"c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974\": container with ID starting with c75fdaf5ec76173a38f46f543f2138442f59d08feac47ace2cc9bbe0d385c974 not found: ID does not exist" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.878707 4724 scope.go:117] "RemoveContainer" containerID="cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05" Feb 23 18:17:36 crc kubenswrapper[4724]: E0223 18:17:36.878986 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05\": container with ID starting with cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05 not found: ID does not exist" containerID="cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.879018 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05"} err="failed to get container status \"cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05\": rpc error: code = NotFound desc = could not find container \"cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05\": container with ID starting with cb1107a5b6b7e812b72baea4bc22abc9bfb02e46157fd83e9718ea3c53f9ca05 not found: ID does not exist" Feb 23 18:17:36 crc kubenswrapper[4724]: I0223 18:17:36.963529 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" path="/var/lib/kubelet/pods/ebfcc8af-7664-4330-a6e9-0bc8bd208550/volumes" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.459729 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:17:39 crc kubenswrapper[4724]: E0223 18:17:39.460894 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="extract-content" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.460910 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="extract-content" Feb 23 18:17:39 crc kubenswrapper[4724]: E0223 18:17:39.460929 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="registry-server" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.460938 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="registry-server" Feb 23 18:17:39 crc kubenswrapper[4724]: E0223 18:17:39.460960 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="extract-utilities" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.460968 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="extract-utilities" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.461219 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebfcc8af-7664-4330-a6e9-0bc8bd208550" containerName="registry-server" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.464229 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.474585 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.553532 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtzzt\" (UniqueName: \"kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.553725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.553792 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.655785 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtzzt\" (UniqueName: \"kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.655912 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.655941 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.656552 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.656622 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.691751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dtzzt\" (UniqueName: \"kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt\") pod \"redhat-operators-xw98p\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:39 crc kubenswrapper[4724]: I0223 18:17:39.801445 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:40 crc kubenswrapper[4724]: I0223 18:17:40.317856 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:17:40 crc kubenswrapper[4724]: I0223 18:17:40.803023 4724 generic.go:334] "Generic (PLEG): container finished" podID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerID="35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02" exitCode=0 Feb 23 18:17:40 crc kubenswrapper[4724]: I0223 18:17:40.803325 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerDied","Data":"35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02"} Feb 23 18:17:40 crc kubenswrapper[4724]: I0223 18:17:40.803415 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerStarted","Data":"3cb550f85ec82449309073a0a388cc965f423803f7b9a40e95068dd3587abe55"} Feb 23 18:17:41 crc kubenswrapper[4724]: I0223 18:17:41.814898 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerStarted","Data":"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933"} Feb 23 18:17:45 crc kubenswrapper[4724]: I0223 18:17:45.850629 4724 generic.go:334] "Generic (PLEG): container finished" podID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerID="33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933" exitCode=0 Feb 23 18:17:45 crc kubenswrapper[4724]: I0223 18:17:45.850723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerDied","Data":"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933"} Feb 23 18:17:46 crc kubenswrapper[4724]: I0223 18:17:46.862208 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerStarted","Data":"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367"} Feb 23 18:17:46 crc kubenswrapper[4724]: I0223 18:17:46.880573 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xw98p" podStartSLOduration=2.334905165 podStartE2EDuration="7.880549135s" podCreationTimestamp="2026-02-23 18:17:39 +0000 UTC" firstStartedPulling="2026-02-23 18:17:40.804757834 +0000 UTC m=+2816.620957434" lastFinishedPulling="2026-02-23 18:17:46.350401804 +0000 UTC m=+2822.166601404" observedRunningTime="2026-02-23 18:17:46.879029136 +0000 UTC m=+2822.695228736" watchObservedRunningTime="2026-02-23 18:17:46.880549135 +0000 UTC m=+2822.696748735" Feb 23 18:17:49 crc kubenswrapper[4724]: I0223 18:17:49.802610 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 
18:17:49 crc kubenswrapper[4724]: I0223 18:17:49.803310 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:50 crc kubenswrapper[4724]: I0223 18:17:50.852785 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xw98p" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="registry-server" probeResult="failure" output=< Feb 23 18:17:50 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:17:50 crc kubenswrapper[4724]: > Feb 23 18:17:59 crc kubenswrapper[4724]: I0223 18:17:59.852741 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:17:59 crc kubenswrapper[4724]: I0223 18:17:59.912279 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:18:00 crc kubenswrapper[4724]: I0223 18:18:00.088892 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:18:00 crc kubenswrapper[4724]: I0223 18:18:00.990501 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xw98p" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="registry-server" containerID="cri-o://3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367" gracePeriod=2 Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.464412 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.519473 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content\") pod \"30f2744b-f64a-46fa-aebd-22d3c2a79265\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.519662 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtzzt\" (UniqueName: \"kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt\") pod \"30f2744b-f64a-46fa-aebd-22d3c2a79265\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.519899 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities\") pod \"30f2744b-f64a-46fa-aebd-22d3c2a79265\" (UID: \"30f2744b-f64a-46fa-aebd-22d3c2a79265\") " Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.520931 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities" (OuterVolumeSpecName: "utilities") pod "30f2744b-f64a-46fa-aebd-22d3c2a79265" (UID: "30f2744b-f64a-46fa-aebd-22d3c2a79265"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.525517 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt" (OuterVolumeSpecName: "kube-api-access-dtzzt") pod "30f2744b-f64a-46fa-aebd-22d3c2a79265" (UID: "30f2744b-f64a-46fa-aebd-22d3c2a79265"). InnerVolumeSpecName "kube-api-access-dtzzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.622074 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.622118 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtzzt\" (UniqueName: \"kubernetes.io/projected/30f2744b-f64a-46fa-aebd-22d3c2a79265-kube-api-access-dtzzt\") on node \"crc\" DevicePath \"\"" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.655244 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30f2744b-f64a-46fa-aebd-22d3c2a79265" (UID: "30f2744b-f64a-46fa-aebd-22d3c2a79265"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:18:01 crc kubenswrapper[4724]: I0223 18:18:01.724153 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f2744b-f64a-46fa-aebd-22d3c2a79265-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.000730 4724 generic.go:334] "Generic (PLEG): container finished" podID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerID="3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367" exitCode=0 Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.000783 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerDied","Data":"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367"} Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.000834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw98p" event={"ID":"30f2744b-f64a-46fa-aebd-22d3c2a79265","Type":"ContainerDied","Data":"3cb550f85ec82449309073a0a388cc965f423803f7b9a40e95068dd3587abe55"} Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.000863 4724 scope.go:117] "RemoveContainer" containerID="3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.001029 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xw98p" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.023523 4724 scope.go:117] "RemoveContainer" containerID="33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.037472 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.046166 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xw98p"] Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.054307 4724 scope.go:117] "RemoveContainer" containerID="35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.086641 4724 scope.go:117] "RemoveContainer" containerID="3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367" Feb 23 18:18:02 crc kubenswrapper[4724]: E0223 18:18:02.087267 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367\": container with ID starting with 3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367 not found: ID does not exist" containerID="3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.087315 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367"} err="failed to get container status \"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367\": rpc error: code = NotFound desc = could not find container \"3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367\": container with ID starting with 3e44da6b00ff9be15e3b8d98c9b78218d3ad18554a9a9a3b69eebdc88833c367 not found: ID does not exist" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.087349 4724 scope.go:117] "RemoveContainer" containerID="33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933" Feb 23 18:18:02 crc kubenswrapper[4724]: E0223 18:18:02.087826 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933\": container with ID starting with 33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933 not found: ID does not exist" containerID="33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.087888 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933"} err="failed to get container status \"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933\": rpc error: code = NotFound desc = could not find container \"33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933\": container with ID starting with 33d1f2a11f3a7ee916d8308a29ebd0dd77da7b902266ea7843fa1bd69c83c933 not found: ID does not exist" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.087919 4724 scope.go:117] "RemoveContainer" containerID="35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02" Feb 23 18:18:02 crc kubenswrapper[4724]: E0223 18:18:02.088214 4724 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02\": container with ID starting with 35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02 not found: ID does not exist" containerID="35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.088237 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02"} err="failed to get container status \"35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02\": rpc error: code = NotFound desc = could not find container \"35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02\": container with ID starting with 35da3ba734feb613be1fd99e0f1cb0628dc55c722de192c7e5d4191a9518df02 not found: ID does not exist" Feb 23 18:18:02 crc kubenswrapper[4724]: I0223 18:18:02.960618 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" path="/var/lib/kubelet/pods/30f2744b-f64a-46fa-aebd-22d3c2a79265/volumes" Feb 23 18:19:27 crc kubenswrapper[4724]: I0223 18:19:27.752517 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:19:27 crc kubenswrapper[4724]: I0223 18:19:27.753864 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:19:57 crc kubenswrapper[4724]: I0223 18:19:57.752333 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:19:57 crc kubenswrapper[4724]: I0223 18:19:57.752862 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:20:27 crc kubenswrapper[4724]: I0223 18:20:27.751926 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:20:27 crc kubenswrapper[4724]: I0223 18:20:27.752597 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:20:27 crc kubenswrapper[4724]: I0223 18:20:27.752665 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:20:27 crc kubenswrapper[4724]: I0223 18:20:27.753517 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:20:27 crc kubenswrapper[4724]: I0223 18:20:27.753593 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf" gracePeriod=600 Feb 23 18:20:28 crc kubenswrapper[4724]: I0223 18:20:28.290466 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf" exitCode=0 Feb 23 18:20:28 crc kubenswrapper[4724]: I0223 18:20:28.290529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf"} Feb 23 18:20:28 crc kubenswrapper[4724]: I0223 18:20:28.290805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d"} Feb 23 18:20:28 crc kubenswrapper[4724]: I0223 18:20:28.290841 4724 scope.go:117] "RemoveContainer" containerID="3c33c3a693d61e5bd20fa1e2524ac802f18500488318f285cffccb9cc71efde6" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.541310 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:13 crc kubenswrapper[4724]: E0223 18:22:13.542374 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="extract-content" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.542488 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="extract-content" Feb 23 18:22:13 crc kubenswrapper[4724]: E0223 18:22:13.542506 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="registry-server" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.542516 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="registry-server" Feb 23 18:22:13 crc kubenswrapper[4724]: E0223 18:22:13.542563 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="extract-utilities" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.542573 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="extract-utilities" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.542802 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="30f2744b-f64a-46fa-aebd-22d3c2a79265" containerName="registry-server" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.544659 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.552109 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.658876 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.659190 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.659458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcz94\" (UniqueName: \"kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.760863 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcz94\" (UniqueName: \"kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.760985 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.761052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.761540 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.761914 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content\") pod \"certified-operators-v6778\" (UID: 
\"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.781238 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcz94\" (UniqueName: \"kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94\") pod \"certified-operators-v6778\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:13 crc kubenswrapper[4724]: I0223 18:22:13.872408 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:14 crc kubenswrapper[4724]: I0223 18:22:14.343175 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:14 crc kubenswrapper[4724]: I0223 18:22:14.906172 4724 generic.go:334] "Generic (PLEG): container finished" podID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerID="c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1" exitCode=0 Feb 23 18:22:14 crc kubenswrapper[4724]: I0223 18:22:14.906231 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerDied","Data":"c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1"} Feb 23 18:22:14 crc kubenswrapper[4724]: I0223 18:22:14.906281 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerStarted","Data":"41aad6df3c9d0c0ee8cf82e93f227ddc4e48b6410dd93cb6f0aa18b7ed49cfad"} Feb 23 18:22:14 crc kubenswrapper[4724]: I0223 18:22:14.908495 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:22:15 crc kubenswrapper[4724]: I0223 18:22:15.920447 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerStarted","Data":"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c"} Feb 23 18:22:18 crc kubenswrapper[4724]: E0223 18:22:18.941430 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8d2bee_bdcc_49ff_ab76_ade97f8be92e.slice/crio-conmon-f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:22:18 crc kubenswrapper[4724]: I0223 18:22:18.950687 4724 generic.go:334] "Generic (PLEG): container finished" podID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerID="f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c" exitCode=0 Feb 23 18:22:18 crc kubenswrapper[4724]: I0223 18:22:18.968588 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerDied","Data":"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c"} Feb 23 18:22:20 crc kubenswrapper[4724]: I0223 18:22:20.970893 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" 
event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerStarted","Data":"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca"} Feb 23 18:22:21 crc kubenswrapper[4724]: I0223 18:22:21.004285 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6778" podStartSLOduration=3.143978485 podStartE2EDuration="8.004265708s" podCreationTimestamp="2026-02-23 18:22:13 +0000 UTC" firstStartedPulling="2026-02-23 18:22:14.908252294 +0000 UTC m=+3090.724451884" lastFinishedPulling="2026-02-23 18:22:19.768539467 +0000 UTC m=+3095.584739107" observedRunningTime="2026-02-23 18:22:20.987544995 +0000 UTC m=+3096.803744595" watchObservedRunningTime="2026-02-23 18:22:21.004265708 +0000 UTC m=+3096.820465308" Feb 23 18:22:23 crc kubenswrapper[4724]: I0223 18:22:23.873218 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:23 crc kubenswrapper[4724]: I0223 18:22:23.873673 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:23 crc kubenswrapper[4724]: I0223 18:22:23.952543 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:24 crc kubenswrapper[4724]: I0223 18:22:24.079251 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:24 crc kubenswrapper[4724]: I0223 18:22:24.195301 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.034268 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v6778" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="registry-server" containerID="cri-o://d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca" gracePeriod=2 Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.535177 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.637518 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcz94\" (UniqueName: \"kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94\") pod \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.637938 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities\") pod \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.638123 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content\") pod \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\" (UID: \"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e\") " Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.638759 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities" (OuterVolumeSpecName: "utilities") pod "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" (UID: "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.644365 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94" (OuterVolumeSpecName: "kube-api-access-pcz94") pod "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" (UID: "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e"). InnerVolumeSpecName "kube-api-access-pcz94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.697602 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" (UID: "eb8d2bee-bdcc-49ff-ab76-ade97f8be92e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.742473 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.742531 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcz94\" (UniqueName: \"kubernetes.io/projected/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-kube-api-access-pcz94\") on node \"crc\" DevicePath \"\"" Feb 23 18:22:26 crc kubenswrapper[4724]: I0223 18:22:26.742545 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.044307 4724 generic.go:334] "Generic (PLEG): container finished" podID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerID="d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca" exitCode=0 Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.044363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerDied","Data":"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca"} Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.044464 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6778" event={"ID":"eb8d2bee-bdcc-49ff-ab76-ade97f8be92e","Type":"ContainerDied","Data":"41aad6df3c9d0c0ee8cf82e93f227ddc4e48b6410dd93cb6f0aa18b7ed49cfad"} Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.044511 4724 scope.go:117] "RemoveContainer" containerID="d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.045452 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6778" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.067826 4724 scope.go:117] "RemoveContainer" containerID="f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.072283 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.081216 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v6778"] Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.087034 4724 scope.go:117] "RemoveContainer" containerID="c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.137817 4724 scope.go:117] "RemoveContainer" containerID="d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca" Feb 23 18:22:27 crc kubenswrapper[4724]: E0223 18:22:27.138345 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca\": container with ID starting with d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca not found: ID does not exist" containerID="d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.138496 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca"} err="failed to get container status \"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca\": rpc error: code = NotFound desc = could not find container \"d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca\": container with ID starting with d250874cd7696840b09ae50a2fa19d04be85b5887bc370fd9d831846da78d5ca not found: ID does not exist" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.138591 4724 scope.go:117] "RemoveContainer" containerID="f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c" Feb 23 18:22:27 crc kubenswrapper[4724]: E0223 18:22:27.139054 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c\": container with ID starting with f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c not found: ID does not exist" containerID="f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.139089 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c"} err="failed to get container status \"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c\": rpc error: code = NotFound desc = could not find container \"f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c\": container with ID starting with f1752c778c90e59eb648dee69107fabd137f48d66602f2c68e47b7ed398fb31c not found: ID does not exist" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.139112 4724 scope.go:117] "RemoveContainer" containerID="c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1" Feb 23 18:22:27 crc kubenswrapper[4724]: E0223 18:22:27.139385 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1\": container with ID starting with c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1 not found: ID does not exist" containerID="c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1" Feb 23 18:22:27 crc kubenswrapper[4724]: I0223 18:22:27.139444 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1"} err="failed to get container status \"c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1\": rpc error: code = NotFound desc = could not find container \"c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1\": container with ID starting with c6d160f438eef8fbf5d94777507b330a010e4dc67d74ef078f8ef31def9300d1 not found: ID does not exist" Feb 23 18:22:28 crc kubenswrapper[4724]: I0223 18:22:28.960986 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" path="/var/lib/kubelet/pods/eb8d2bee-bdcc-49ff-ab76-ade97f8be92e/volumes" Feb 23 18:22:57 crc kubenswrapper[4724]: I0223 18:22:57.752574 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:22:57 crc kubenswrapper[4724]: I0223 18:22:57.752990 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:23:27 crc kubenswrapper[4724]: I0223 18:23:27.752651 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:23:27 crc kubenswrapper[4724]: I0223 18:23:27.753209 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:23:57 crc kubenswrapper[4724]: I0223 18:23:57.751762 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:23:57 crc kubenswrapper[4724]: I0223 18:23:57.752312 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:23:57 crc kubenswrapper[4724]: I0223 18:23:57.752357 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:23:57 crc kubenswrapper[4724]: I0223 18:23:57.753139 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:23:57 crc kubenswrapper[4724]: I0223 18:23:57.753226 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" gracePeriod=600 Feb 23 18:23:58 crc kubenswrapper[4724]: E0223 18:23:58.234459 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:23:58 crc kubenswrapper[4724]: I0223 18:23:58.858942 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" exitCode=0 Feb 23 18:23:58 crc kubenswrapper[4724]: I0223 18:23:58.858987 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d"} Feb 23 18:23:58 crc kubenswrapper[4724]: I0223 18:23:58.859024 4724 scope.go:117] "RemoveContainer" containerID="b033258df9f255c0b2ea97bdef3f4c62ca399ef091efe17c797300a595bddebf" Feb 23 18:23:58 crc kubenswrapper[4724]: I0223 18:23:58.860020 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:23:58 crc kubenswrapper[4724]: E0223 18:23:58.860521 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:24:11 crc kubenswrapper[4724]: I0223 18:24:11.951962 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:24:11 crc kubenswrapper[4724]: E0223 18:24:11.952833 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:24:24 crc 
kubenswrapper[4724]: I0223 18:24:24.957850 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:24:24 crc kubenswrapper[4724]: E0223 18:24:24.959528 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:24:36 crc kubenswrapper[4724]: I0223 18:24:36.951744 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:24:36 crc kubenswrapper[4724]: E0223 18:24:36.952559 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:24:49 crc kubenswrapper[4724]: I0223 18:24:49.951958 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:24:49 crc kubenswrapper[4724]: E0223 18:24:49.953004 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:25:03 crc kubenswrapper[4724]: I0223 18:25:03.951568 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:25:03 crc kubenswrapper[4724]: E0223 18:25:03.952274 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:25:18 crc kubenswrapper[4724]: I0223 18:25:18.950886 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:25:18 crc kubenswrapper[4724]: E0223 18:25:18.951613 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:25:33 crc kubenswrapper[4724]: I0223 18:25:33.951287 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:25:33 crc 
kubenswrapper[4724]: E0223 18:25:33.952545 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:25:44 crc kubenswrapper[4724]: I0223 18:25:44.961635 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:25:44 crc kubenswrapper[4724]: E0223 18:25:44.962353 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:25:56 crc kubenswrapper[4724]: I0223 18:25:56.951008 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:25:56 crc kubenswrapper[4724]: E0223 18:25:56.951811 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:26:10 crc kubenswrapper[4724]: I0223 18:26:10.951000 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:26:10 crc kubenswrapper[4724]: E0223 18:26:10.951946 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:26:22 crc kubenswrapper[4724]: I0223 18:26:22.951684 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:26:22 crc kubenswrapper[4724]: E0223 18:26:22.952640 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:26:36 crc kubenswrapper[4724]: I0223 18:26:36.951508 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:26:36 crc kubenswrapper[4724]: E0223 18:26:36.952158 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:26:49 crc kubenswrapper[4724]: I0223 18:26:49.950741 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:26:49 crc kubenswrapper[4724]: E0223 18:26:49.951579 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.079280 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:26:58 crc kubenswrapper[4724]: E0223 18:26:58.082763 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="registry-server" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.082923 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="registry-server" Feb 23 18:26:58 crc kubenswrapper[4724]: E0223 18:26:58.083020 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="extract-content" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.083115 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="extract-content" Feb 23 18:26:58 crc kubenswrapper[4724]: E0223 18:26:58.083235 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="extract-utilities" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.083319 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="extract-utilities" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.083696 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d2bee-bdcc-49ff-ab76-ade97f8be92e" containerName="registry-server" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.085608 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.091038 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.142809 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.143133 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.143318 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjqb\" (UniqueName: \"kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.245791 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.245914 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.245948 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjqb\" (UniqueName: \"kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.246590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.246594 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.272241 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nkjqb\" (UniqueName: \"kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb\") pod \"community-operators-77qhj\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.414411 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:26:58 crc kubenswrapper[4724]: I0223 18:26:58.974846 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:26:59 crc kubenswrapper[4724]: I0223 18:26:59.407783 4724 generic.go:334] "Generic (PLEG): container finished" podID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerID="58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2" exitCode=0 Feb 23 18:26:59 crc kubenswrapper[4724]: I0223 18:26:59.407827 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerDied","Data":"58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2"} Feb 23 18:26:59 crc kubenswrapper[4724]: I0223 18:26:59.407856 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerStarted","Data":"2581ac5c120c89b26d9d131ff3c1be9d14ccc1b86ce74102574efb6fce768d5a"} Feb 23 18:27:00 crc kubenswrapper[4724]: I0223 18:27:00.417364 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerStarted","Data":"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f"} Feb 23 18:27:02 crc kubenswrapper[4724]: I0223 18:27:02.436846 4724 generic.go:334] "Generic (PLEG): container finished" podID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerID="9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f" exitCode=0 Feb 23 18:27:02 crc kubenswrapper[4724]: I0223 18:27:02.436968 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerDied","Data":"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f"} Feb 23 18:27:03 crc kubenswrapper[4724]: I0223 18:27:03.449168 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerStarted","Data":"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a"} Feb 23 18:27:03 crc kubenswrapper[4724]: I0223 18:27:03.473060 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77qhj" podStartSLOduration=2.082011895 podStartE2EDuration="5.473040694s" podCreationTimestamp="2026-02-23 18:26:58 +0000 UTC" firstStartedPulling="2026-02-23 18:26:59.410015185 +0000 UTC m=+3375.226214785" lastFinishedPulling="2026-02-23 18:27:02.801043984 +0000 UTC m=+3378.617243584" observedRunningTime="2026-02-23 18:27:03.465188685 +0000 UTC m=+3379.281388295" watchObservedRunningTime="2026-02-23 18:27:03.473040694 +0000 UTC m=+3379.289240294" Feb 23 18:27:03 crc kubenswrapper[4724]: I0223 18:27:03.952046 4724 scope.go:117] "RemoveContainer" 
containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:27:03 crc kubenswrapper[4724]: E0223 18:27:03.952276 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:27:08 crc kubenswrapper[4724]: I0223 18:27:08.415308 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:08 crc kubenswrapper[4724]: I0223 18:27:08.415868 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:08 crc kubenswrapper[4724]: I0223 18:27:08.461109 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:08 crc kubenswrapper[4724]: I0223 18:27:08.550474 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.069606 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.070193 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77qhj" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="registry-server" containerID="cri-o://8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a" gracePeriod=2 Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.497109 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.526520 4724 generic.go:334] "Generic (PLEG): container finished" podID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerID="8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a" exitCode=0 Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.526556 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerDied","Data":"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a"} Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.526576 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77qhj" event={"ID":"902e4d68-65dd-4b78-8a93-fe261f2a8ec3","Type":"ContainerDied","Data":"2581ac5c120c89b26d9d131ff3c1be9d14ccc1b86ce74102574efb6fce768d5a"} Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.526576 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77qhj" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.526589 4724 scope.go:117] "RemoveContainer" containerID="8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.545665 4724 scope.go:117] "RemoveContainer" containerID="9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.567843 4724 scope.go:117] "RemoveContainer" containerID="58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.615454 4724 scope.go:117] "RemoveContainer" containerID="8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a" Feb 23 18:27:11 crc kubenswrapper[4724]: E0223 18:27:11.616169 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a\": container with ID starting with 8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a not found: ID does not exist" containerID="8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.616234 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a"} err="failed to get container status \"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a\": rpc error: code = NotFound desc = could not find container \"8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a\": container with ID starting with 8cdfb4c5e60a80bd8e480eefb6833856fc9ba8168d07a163fa2c45ea3080b60a not found: ID does not exist" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.616269 4724 scope.go:117] "RemoveContainer" containerID="9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f" Feb 23 18:27:11 crc kubenswrapper[4724]: E0223 18:27:11.616664 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f\": container with ID starting with 9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f not found: ID does not exist" containerID="9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.616695 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f"} err="failed to get container status \"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f\": rpc error: code = NotFound desc = could not find container \"9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f\": container with ID starting with 9317f9e327b1252eb526e278d46880622715fcdfa7a49cba9a22d521c8d8a21f not found: ID does not exist" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.616715 4724 scope.go:117] "RemoveContainer" containerID="58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2" Feb 23 18:27:11 crc kubenswrapper[4724]: E0223 18:27:11.617117 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2\": container with ID starting 
with 58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2 not found: ID does not exist" containerID="58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.617155 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2"} err="failed to get container status \"58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2\": rpc error: code = NotFound desc = could not find container \"58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2\": container with ID starting with 58537a1ca768244ef874168a2159de85b8edda8328f291e830c5f88f50e4f9d2 not found: ID does not exist" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.661999 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkjqb\" (UniqueName: \"kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb\") pod \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.662135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content\") pod \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.662238 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities\") pod \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\" (UID: \"902e4d68-65dd-4b78-8a93-fe261f2a8ec3\") " Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.663277 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities" (OuterVolumeSpecName: "utilities") pod "902e4d68-65dd-4b78-8a93-fe261f2a8ec3" (UID: "902e4d68-65dd-4b78-8a93-fe261f2a8ec3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.670337 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb" (OuterVolumeSpecName: "kube-api-access-nkjqb") pod "902e4d68-65dd-4b78-8a93-fe261f2a8ec3" (UID: "902e4d68-65dd-4b78-8a93-fe261f2a8ec3"). InnerVolumeSpecName "kube-api-access-nkjqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.710070 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "902e4d68-65dd-4b78-8a93-fe261f2a8ec3" (UID: "902e4d68-65dd-4b78-8a93-fe261f2a8ec3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.764800 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.764843 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkjqb\" (UniqueName: \"kubernetes.io/projected/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-kube-api-access-nkjqb\") on node \"crc\" DevicePath \"\"" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.764854 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902e4d68-65dd-4b78-8a93-fe261f2a8ec3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.857024 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:27:11 crc kubenswrapper[4724]: I0223 18:27:11.865169 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77qhj"] Feb 23 18:27:12 crc kubenswrapper[4724]: I0223 18:27:12.961043 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" path="/var/lib/kubelet/pods/902e4d68-65dd-4b78-8a93-fe261f2a8ec3/volumes" Feb 23 18:27:16 crc kubenswrapper[4724]: I0223 18:27:16.951299 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:27:16 crc kubenswrapper[4724]: E0223 18:27:16.951950 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:27:26 crc kubenswrapper[4724]: E0223 18:27:26.208757 4724 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.174:60502->38.102.83.174:46225: read tcp 38.102.83.174:60502->38.102.83.174:46225: read: connection reset by peer Feb 23 18:27:27 crc kubenswrapper[4724]: E0223 18:27:27.877880 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:51782->38.102.83.174:46225: write tcp 38.102.83.174:51782->38.102.83.174:46225: write: broken pipe Feb 23 18:27:28 crc kubenswrapper[4724]: I0223 18:27:28.952077 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:27:28 crc kubenswrapper[4724]: E0223 18:27:28.953001 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:27:40 crc kubenswrapper[4724]: I0223 18:27:40.951896 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:27:40 crc kubenswrapper[4724]: 
E0223 18:27:40.952652 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:27:52 crc kubenswrapper[4724]: I0223 18:27:52.951199 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:27:52 crc kubenswrapper[4724]: E0223 18:27:52.952063 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:28:07 crc kubenswrapper[4724]: I0223 18:28:07.951565 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:28:07 crc kubenswrapper[4724]: E0223 18:28:07.953089 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.239601 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:08 crc kubenswrapper[4724]: E0223 18:28:08.240305 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="extract-content" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.240320 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="extract-content" Feb 23 18:28:08 crc kubenswrapper[4724]: E0223 18:28:08.240340 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="registry-server" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.240347 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="registry-server" Feb 23 18:28:08 crc kubenswrapper[4724]: E0223 18:28:08.240362 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="extract-utilities" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.240369 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="extract-utilities" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.240605 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="902e4d68-65dd-4b78-8a93-fe261f2a8ec3" containerName="registry-server" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.242025 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.257934 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.356580 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.356791 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d79fc\" (UniqueName: \"kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.357055 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.459871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d79fc\" (UniqueName: \"kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.460045 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.460178 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.460901 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.460932 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.484416 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d79fc\" (UniqueName: \"kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc\") pod \"redhat-marketplace-vpc4r\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:08 crc kubenswrapper[4724]: I0223 18:28:08.566557 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:09 crc kubenswrapper[4724]: I0223 18:28:09.062214 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.050530 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerID="38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e" exitCode=0 Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.050671 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerDied","Data":"38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e"} Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.051176 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerStarted","Data":"576ba19e6d4caf64d679bf3a80d7ab032b60dcc20e43b48e5e61d9b282452f55"} Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.053231 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.837993 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"] Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.840192 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.849455 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"] Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.912037 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmd5n\" (UniqueName: \"kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.912371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:10 crc kubenswrapper[4724]: I0223 18:28:10.912468 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.015523 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmd5n\" (UniqueName: \"kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.015628 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.015650 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.016153 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.016144 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.041830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jmd5n\" (UniqueName: \"kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n\") pod \"redhat-operators-z2blh\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") " pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.066265 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerStarted","Data":"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b"} Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.158461 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:11 crc kubenswrapper[4724]: I0223 18:28:11.631001 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"] Feb 23 18:28:11 crc kubenswrapper[4724]: W0223 18:28:11.636936 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda826dc5e_08ae_48eb_a02b_c451d026a6a4.slice/crio-5b119b0790063c13fc4942c22e158847df37a21d8132f390ce80d4190e35c397 WatchSource:0}: Error finding container 5b119b0790063c13fc4942c22e158847df37a21d8132f390ce80d4190e35c397: Status 404 returned error can't find the container with id 5b119b0790063c13fc4942c22e158847df37a21d8132f390ce80d4190e35c397 Feb 23 18:28:12 crc kubenswrapper[4724]: I0223 18:28:12.075330 4724 generic.go:334] "Generic (PLEG): container finished" podID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerID="c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434" exitCode=0 Feb 23 18:28:12 crc kubenswrapper[4724]: I0223 18:28:12.075404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerDied","Data":"c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434"} Feb 23 18:28:12 crc kubenswrapper[4724]: I0223 18:28:12.075687 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerStarted","Data":"5b119b0790063c13fc4942c22e158847df37a21d8132f390ce80d4190e35c397"} Feb 23 18:28:12 crc kubenswrapper[4724]: I0223 18:28:12.079126 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerID="607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b" exitCode=0 Feb 23 18:28:12 crc kubenswrapper[4724]: I0223 18:28:12.079179 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerDied","Data":"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b"} Feb 23 18:28:13 crc kubenswrapper[4724]: I0223 18:28:13.097048 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerStarted","Data":"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f"} Feb 23 18:28:13 crc kubenswrapper[4724]: I0223 18:28:13.106028 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" 
event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerStarted","Data":"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd"} Feb 23 18:28:13 crc kubenswrapper[4724]: I0223 18:28:13.144940 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpc4r" podStartSLOduration=2.7211757629999997 podStartE2EDuration="5.144922552s" podCreationTimestamp="2026-02-23 18:28:08 +0000 UTC" firstStartedPulling="2026-02-23 18:28:10.053027733 +0000 UTC m=+3445.869227323" lastFinishedPulling="2026-02-23 18:28:12.476774512 +0000 UTC m=+3448.292974112" observedRunningTime="2026-02-23 18:28:13.137047654 +0000 UTC m=+3448.953247264" watchObservedRunningTime="2026-02-23 18:28:13.144922552 +0000 UTC m=+3448.961122152" Feb 23 18:28:17 crc kubenswrapper[4724]: I0223 18:28:17.141903 4724 generic.go:334] "Generic (PLEG): container finished" podID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerID="f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f" exitCode=0 Feb 23 18:28:17 crc kubenswrapper[4724]: I0223 18:28:17.141991 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerDied","Data":"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f"} Feb 23 18:28:18 crc kubenswrapper[4724]: I0223 18:28:18.151823 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerStarted","Data":"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966"} Feb 23 18:28:18 crc kubenswrapper[4724]: I0223 18:28:18.180359 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z2blh" podStartSLOduration=2.723272439 podStartE2EDuration="8.180341046s" podCreationTimestamp="2026-02-23 18:28:10 +0000 UTC" firstStartedPulling="2026-02-23 18:28:12.077003367 +0000 UTC m=+3447.893202967" lastFinishedPulling="2026-02-23 18:28:17.534071974 +0000 UTC m=+3453.350271574" observedRunningTime="2026-02-23 18:28:18.171629417 +0000 UTC m=+3453.987829017" watchObservedRunningTime="2026-02-23 18:28:18.180341046 +0000 UTC m=+3453.996540646" Feb 23 18:28:18 crc kubenswrapper[4724]: I0223 18:28:18.566737 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:18 crc kubenswrapper[4724]: I0223 18:28:18.567049 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:18 crc kubenswrapper[4724]: I0223 18:28:18.625177 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:19 crc kubenswrapper[4724]: I0223 18:28:19.203643 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:20 crc kubenswrapper[4724]: I0223 18:28:20.427218 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:20 crc kubenswrapper[4724]: I0223 18:28:20.950871 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:28:20 crc kubenswrapper[4724]: E0223 18:28:20.951144 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.159084 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.159445 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.176118 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpc4r" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="registry-server" containerID="cri-o://7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd" gracePeriod=2 Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.659864 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.757217 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities\") pod \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.757386 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d79fc\" (UniqueName: \"kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc\") pod \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.757437 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content\") pod \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\" (UID: \"b4c51f96-112f-4c4a-89a1-23b4c6151a22\") " Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.757859 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities" (OuterVolumeSpecName: "utilities") pod "b4c51f96-112f-4c4a-89a1-23b4c6151a22" (UID: "b4c51f96-112f-4c4a-89a1-23b4c6151a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.757987 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.763545 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc" (OuterVolumeSpecName: "kube-api-access-d79fc") pod "b4c51f96-112f-4c4a-89a1-23b4c6151a22" (UID: "b4c51f96-112f-4c4a-89a1-23b4c6151a22"). InnerVolumeSpecName "kube-api-access-d79fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.780731 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4c51f96-112f-4c4a-89a1-23b4c6151a22" (UID: "b4c51f96-112f-4c4a-89a1-23b4c6151a22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.860171 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c51f96-112f-4c4a-89a1-23b4c6151a22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:28:21 crc kubenswrapper[4724]: I0223 18:28:21.860215 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d79fc\" (UniqueName: \"kubernetes.io/projected/b4c51f96-112f-4c4a-89a1-23b4c6151a22-kube-api-access-d79fc\") on node \"crc\" DevicePath \"\"" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.185690 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerID="7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd" exitCode=0 Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.185745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerDied","Data":"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd"} Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.185776 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpc4r" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.185790 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpc4r" event={"ID":"b4c51f96-112f-4c4a-89a1-23b4c6151a22","Type":"ContainerDied","Data":"576ba19e6d4caf64d679bf3a80d7ab032b60dcc20e43b48e5e61d9b282452f55"} Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.185800 4724 scope.go:117] "RemoveContainer" containerID="7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.210767 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z2blh" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="registry-server" probeResult="failure" output=< Feb 23 18:28:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:28:22 crc kubenswrapper[4724]: > Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.211821 4724 scope.go:117] "RemoveContainer" containerID="607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.235288 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.247617 4724 scope.go:117] "RemoveContainer" containerID="38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.247649 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpc4r"] Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.285758 4724 scope.go:117] "RemoveContainer" 
containerID="7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd" Feb 23 18:28:22 crc kubenswrapper[4724]: E0223 18:28:22.286447 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd\": container with ID starting with 7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd not found: ID does not exist" containerID="7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.286545 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd"} err="failed to get container status \"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd\": rpc error: code = NotFound desc = could not find container \"7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd\": container with ID starting with 7931a51a619b0f4ec10fc536222d21e3d77cbee21fd4e0fc990666bcd24a2cfd not found: ID does not exist" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.286587 4724 scope.go:117] "RemoveContainer" containerID="607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b" Feb 23 18:28:22 crc kubenswrapper[4724]: E0223 18:28:22.287424 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b\": container with ID starting with 607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b not found: ID does not exist" containerID="607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.287463 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b"} err="failed to get container status \"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b\": rpc error: code = NotFound desc = could not find container \"607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b\": container with ID starting with 607e3bc15d7e7432ce87f7233595bfd61d529cd6f7f45adfaabd3dd33dd9a78b not found: ID does not exist" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.287581 4724 scope.go:117] "RemoveContainer" containerID="38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e" Feb 23 18:28:22 crc kubenswrapper[4724]: E0223 18:28:22.288333 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e\": container with ID starting with 38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e not found: ID does not exist" containerID="38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e" Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.288368 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e"} err="failed to get container status \"38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e\": rpc error: code = NotFound desc = could not find container \"38b631e40a8c49cc309f166616befe77b121c1351d03313d8a13489fa292876e\": container with ID starting with 
Feb 23 18:28:22 crc kubenswrapper[4724]: I0223 18:28:22.964106 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" path="/var/lib/kubelet/pods/b4c51f96-112f-4c4a-89a1-23b4c6151a22/volumes"
Feb 23 18:28:31 crc kubenswrapper[4724]: I0223 18:28:31.202714 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z2blh"
Feb 23 18:28:31 crc kubenswrapper[4724]: I0223 18:28:31.245135 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z2blh"
Feb 23 18:28:31 crc kubenswrapper[4724]: I0223 18:28:31.436363 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"]
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.282635 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z2blh" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="registry-server" containerID="cri-o://4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966" gracePeriod=2
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.736798 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2blh"
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.787487 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content\") pod \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") "
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.788912 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmd5n\" (UniqueName: \"kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n\") pod \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") "
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.789055 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities\") pod \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\" (UID: \"a826dc5e-08ae-48eb-a02b-c451d026a6a4\") "
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.790547 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities" (OuterVolumeSpecName: "utilities") pod "a826dc5e-08ae-48eb-a02b-c451d026a6a4" (UID: "a826dc5e-08ae-48eb-a02b-c451d026a6a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.794540 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n" (OuterVolumeSpecName: "kube-api-access-jmd5n") pod "a826dc5e-08ae-48eb-a02b-c451d026a6a4" (UID: "a826dc5e-08ae-48eb-a02b-c451d026a6a4"). InnerVolumeSpecName "kube-api-access-jmd5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.892521 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.892557 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmd5n\" (UniqueName: \"kubernetes.io/projected/a826dc5e-08ae-48eb-a02b-c451d026a6a4-kube-api-access-jmd5n\") on node \"crc\" DevicePath \"\""
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.909255 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a826dc5e-08ae-48eb-a02b-c451d026a6a4" (UID: "a826dc5e-08ae-48eb-a02b-c451d026a6a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:28:32 crc kubenswrapper[4724]: I0223 18:28:32.994664 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a826dc5e-08ae-48eb-a02b-c451d026a6a4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.292464 4724 generic.go:334] "Generic (PLEG): container finished" podID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerID="4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966" exitCode=0
Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.292505 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerDied","Data":"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966"}
Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.292532 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2blh" event={"ID":"a826dc5e-08ae-48eb-a02b-c451d026a6a4","Type":"ContainerDied","Data":"5b119b0790063c13fc4942c22e158847df37a21d8132f390ce80d4190e35c397"}
Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.292550 4724 scope.go:117] "RemoveContainer" containerID="4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966"
Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.292554 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2blh"
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2blh" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.311109 4724 scope.go:117] "RemoveContainer" containerID="f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.316242 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"] Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.326328 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z2blh"] Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.337179 4724 scope.go:117] "RemoveContainer" containerID="c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.385200 4724 scope.go:117] "RemoveContainer" containerID="4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966" Feb 23 18:28:33 crc kubenswrapper[4724]: E0223 18:28:33.385925 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966\": container with ID starting with 4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966 not found: ID does not exist" containerID="4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.385963 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966"} err="failed to get container status \"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966\": rpc error: code = NotFound desc = could not find container \"4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966\": container with ID starting with 4cf960b9ab571a1dd1fb1f459df1c7178b329918fe8a606982b4f5864b952966 not found: ID does not exist" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.386001 4724 scope.go:117] "RemoveContainer" containerID="f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f" Feb 23 18:28:33 crc kubenswrapper[4724]: E0223 18:28:33.386376 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f\": container with ID starting with f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f not found: ID does not exist" containerID="f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.386595 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f"} err="failed to get container status \"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f\": rpc error: code = NotFound desc = could not find container \"f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f\": container with ID starting with f48e6ca70a5d6b07a8cc4b575d17659044c562d60b887bf06891905ce1a63b4f not found: ID does not exist" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.386747 4724 scope.go:117] "RemoveContainer" containerID="c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434" Feb 23 18:28:33 crc kubenswrapper[4724]: E0223 18:28:33.387168 4724 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434\": container with ID starting with c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434 not found: ID does not exist" containerID="c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.387219 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434"} err="failed to get container status \"c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434\": rpc error: code = NotFound desc = could not find container \"c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434\": container with ID starting with c00cbef29e64ab98aaf89dae7ff33e62beef2509cdbc23e5b0d37d37f7676434 not found: ID does not exist" Feb 23 18:28:33 crc kubenswrapper[4724]: I0223 18:28:33.952281 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:28:33 crc kubenswrapper[4724]: E0223 18:28:33.952961 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:28:34 crc kubenswrapper[4724]: I0223 18:28:34.970282 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" path="/var/lib/kubelet/pods/a826dc5e-08ae-48eb-a02b-c451d026a6a4/volumes" Feb 23 18:28:45 crc kubenswrapper[4724]: I0223 18:28:45.951285 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:28:45 crc kubenswrapper[4724]: E0223 18:28:45.952048 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:28:58 crc kubenswrapper[4724]: I0223 18:28:58.952004 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:28:59 crc kubenswrapper[4724]: I0223 18:28:59.532884 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d"} Feb 23 18:29:23 crc kubenswrapper[4724]: I0223 18:29:23.894505 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f447dffc7-s2mfq" podUID="46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.178419 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"] Feb 23 
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.180971 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="extract-content"
Feb 23 18:30:00 crc kubenswrapper[4724]: E0223 18:30:00.181074 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="extract-content"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.181162 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="extract-content"
Feb 23 18:30:00 crc kubenswrapper[4724]: E0223 18:30:00.181245 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="extract-utilities"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.181337 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="extract-utilities"
Feb 23 18:30:00 crc kubenswrapper[4724]: E0223 18:30:00.181466 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.181552 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: E0223 18:30:00.181643 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.181725 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: E0223 18:30:00.181819 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="extract-utilities"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.181905 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="extract-utilities"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.182296 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c51f96-112f-4c4a-89a1-23b4c6151a22" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.182466 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a826dc5e-08ae-48eb-a02b-c451d026a6a4" containerName="registry-server"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.183426 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.186381 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.188313 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"]
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.190815 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.235295 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.235648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhwd\" (UniqueName: \"kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.235828 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.338220 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.338323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.338361 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfhwd\" (UniqueName: \"kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.348997 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"
\"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.352635 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.358287 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfhwd\" (UniqueName: \"kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd\") pod \"collect-profiles-29531190-wv86x\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.506420 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:00 crc kubenswrapper[4724]: I0223 18:30:00.946231 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x"] Feb 23 18:30:00 crc kubenswrapper[4724]: W0223 18:30:00.948120 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcc0214d_0a98_42f0_ac3c_0abf67e17341.slice/crio-89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370 WatchSource:0}: Error finding container 89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370: Status 404 returned error can't find the container with id 89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370 Feb 23 18:30:01 crc kubenswrapper[4724]: I0223 18:30:01.088090 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" event={"ID":"dcc0214d-0a98-42f0-ac3c-0abf67e17341","Type":"ContainerStarted","Data":"89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370"} Feb 23 18:30:02 crc kubenswrapper[4724]: I0223 18:30:02.099284 4724 generic.go:334] "Generic (PLEG): container finished" podID="dcc0214d-0a98-42f0-ac3c-0abf67e17341" containerID="2f07eb74ab35b9a7d6e43b5413c8f325328b74d73d226ca08ee0dd3d03c67d28" exitCode=0 Feb 23 18:30:02 crc kubenswrapper[4724]: I0223 18:30:02.099577 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" event={"ID":"dcc0214d-0a98-42f0-ac3c-0abf67e17341","Type":"ContainerDied","Data":"2f07eb74ab35b9a7d6e43b5413c8f325328b74d73d226ca08ee0dd3d03c67d28"} Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.467080 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.782595 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume\") pod \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.782661 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfhwd\" (UniqueName: \"kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd\") pod \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.782770 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume\") pod \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\" (UID: \"dcc0214d-0a98-42f0-ac3c-0abf67e17341\") " Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.783520 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcc0214d-0a98-42f0-ac3c-0abf67e17341" (UID: "dcc0214d-0a98-42f0-ac3c-0abf67e17341"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.789634 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd" (OuterVolumeSpecName: "kube-api-access-mfhwd") pod "dcc0214d-0a98-42f0-ac3c-0abf67e17341" (UID: "dcc0214d-0a98-42f0-ac3c-0abf67e17341"). InnerVolumeSpecName "kube-api-access-mfhwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.789924 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcc0214d-0a98-42f0-ac3c-0abf67e17341" (UID: "dcc0214d-0a98-42f0-ac3c-0abf67e17341"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.885514 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcc0214d-0a98-42f0-ac3c-0abf67e17341-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.885845 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfhwd\" (UniqueName: \"kubernetes.io/projected/dcc0214d-0a98-42f0-ac3c-0abf67e17341-kube-api-access-mfhwd\") on node \"crc\" DevicePath \"\"" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:03.885860 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcc0214d-0a98-42f0-ac3c-0abf67e17341-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.117887 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" event={"ID":"dcc0214d-0a98-42f0-ac3c-0abf67e17341","Type":"ContainerDied","Data":"89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370"} Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.117921 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89dd287cb2e794ad654ee6f6b8472aae7d5d1ec82d076dfc9a62d8bb63cb5370" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.117925 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-wv86x" Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.548689 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc"] Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.560736 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531145-668zc"] Feb 23 18:30:04 crc kubenswrapper[4724]: I0223 18:30:04.966867 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee97caf-66fd-4f32-bb1e-e69f22806a7b" path="/var/lib/kubelet/pods/cee97caf-66fd-4f32-bb1e-e69f22806a7b/volumes" Feb 23 18:30:50 crc kubenswrapper[4724]: I0223 18:30:50.905273 4724 scope.go:117] "RemoveContainer" containerID="436d965bef8a9bbe24686f042c83e50357c529ed1634eecbd00a8fc85a22ea9c" Feb 23 18:31:27 crc kubenswrapper[4724]: I0223 18:31:27.752050 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:31:27 crc kubenswrapper[4724]: I0223 18:31:27.752545 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:31:57 crc kubenswrapper[4724]: I0223 18:31:57.752630 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 23 18:31:57 crc kubenswrapper[4724]: I0223 18:31:57.753210 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:32:27 crc kubenswrapper[4724]: I0223 18:32:27.752611 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:32:27 crc kubenswrapper[4724]: I0223 18:32:27.753166 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:32:27 crc kubenswrapper[4724]: I0223 18:32:27.753215 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:32:27 crc kubenswrapper[4724]: I0223 18:32:27.754082 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:32:27 crc kubenswrapper[4724]: I0223 18:32:27.754144 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d" gracePeriod=600 Feb 23 18:32:28 crc kubenswrapper[4724]: I0223 18:32:28.437456 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d" exitCode=0 Feb 23 18:32:28 crc kubenswrapper[4724]: I0223 18:32:28.437645 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d"} Feb 23 18:32:28 crc kubenswrapper[4724]: I0223 18:32:28.438103 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3"} Feb 23 18:32:28 crc kubenswrapper[4724]: I0223 18:32:28.438124 4724 scope.go:117] "RemoveContainer" containerID="64d71fc5aeb4931e4af4007838ccb218942291620b2aad62c58ece0405cf269d" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.795074 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:41 crc kubenswrapper[4724]: E0223 18:32:41.796077 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dcc0214d-0a98-42f0-ac3c-0abf67e17341" containerName="collect-profiles" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.796096 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc0214d-0a98-42f0-ac3c-0abf67e17341" containerName="collect-profiles" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.796348 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc0214d-0a98-42f0-ac3c-0abf67e17341" containerName="collect-profiles" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.798205 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.811085 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.942994 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dtj\" (UniqueName: \"kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.943054 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:41 crc kubenswrapper[4724]: I0223 18:32:41.943243 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.045740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dtj\" (UniqueName: \"kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.045797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.045862 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.046465 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.046652 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.070222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dtj\" (UniqueName: \"kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj\") pod \"certified-operators-2lffk\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.117839 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:42 crc kubenswrapper[4724]: I0223 18:32:42.598747 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:42 crc kubenswrapper[4724]: W0223 18:32:42.602223 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0093b5_3833_4b90_83f0_4ad6747ba032.slice/crio-8bf3fd97dbafc5c7b7524039b9b9a62937d6f55048f2ee0c7804404a31d12874 WatchSource:0}: Error finding container 8bf3fd97dbafc5c7b7524039b9b9a62937d6f55048f2ee0c7804404a31d12874: Status 404 returned error can't find the container with id 8bf3fd97dbafc5c7b7524039b9b9a62937d6f55048f2ee0c7804404a31d12874 Feb 23 18:32:43 crc kubenswrapper[4724]: I0223 18:32:43.576147 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerID="ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9" exitCode=0 Feb 23 18:32:43 crc kubenswrapper[4724]: I0223 18:32:43.576197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerDied","Data":"ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9"} Feb 23 18:32:43 crc kubenswrapper[4724]: I0223 18:32:43.576545 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerStarted","Data":"8bf3fd97dbafc5c7b7524039b9b9a62937d6f55048f2ee0c7804404a31d12874"} Feb 23 18:32:44 crc kubenswrapper[4724]: I0223 18:32:44.584989 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerStarted","Data":"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36"} Feb 23 18:32:46 crc kubenswrapper[4724]: I0223 18:32:46.609893 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerID="445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36" exitCode=0 Feb 23 18:32:46 crc kubenswrapper[4724]: I0223 18:32:46.610026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerDied","Data":"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36"} Feb 23 18:32:47 crc kubenswrapper[4724]: I0223 18:32:47.628138 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerStarted","Data":"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705"} Feb 23 18:32:47 crc kubenswrapper[4724]: I0223 18:32:47.654817 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2lffk" podStartSLOduration=3.259061076 podStartE2EDuration="6.654797464s" podCreationTimestamp="2026-02-23 18:32:41 +0000 UTC" firstStartedPulling="2026-02-23 18:32:43.578567214 +0000 UTC m=+3719.394766814" lastFinishedPulling="2026-02-23 18:32:46.974303602 +0000 UTC m=+3722.790503202" observedRunningTime="2026-02-23 18:32:47.645156837 +0000 UTC m=+3723.461356437" watchObservedRunningTime="2026-02-23 18:32:47.654797464 +0000 UTC m=+3723.470997064" Feb 23 18:32:52 crc kubenswrapper[4724]: I0223 18:32:52.118409 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:52 crc kubenswrapper[4724]: I0223 18:32:52.119304 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:52 crc kubenswrapper[4724]: I0223 18:32:52.202220 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:52 crc kubenswrapper[4724]: I0223 18:32:52.754055 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:52 crc kubenswrapper[4724]: I0223 18:32:52.804361 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:54 crc kubenswrapper[4724]: I0223 18:32:54.714208 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2lffk" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="registry-server" containerID="cri-o://4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705" gracePeriod=2 Feb 23 18:32:54 crc kubenswrapper[4724]: E0223 18:32:54.938178 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0093b5_3833_4b90_83f0_4ad6747ba032.slice/crio-conmon-4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0093b5_3833_4b90_83f0_4ad6747ba032.slice/crio-4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.140956 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.214230 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content\") pod \"3e0093b5-3833-4b90-83f0-4ad6747ba032\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.214356 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities\") pod \"3e0093b5-3833-4b90-83f0-4ad6747ba032\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.214505 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8dtj\" (UniqueName: \"kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj\") pod \"3e0093b5-3833-4b90-83f0-4ad6747ba032\" (UID: \"3e0093b5-3833-4b90-83f0-4ad6747ba032\") " Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.215343 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities" (OuterVolumeSpecName: "utilities") pod "3e0093b5-3833-4b90-83f0-4ad6747ba032" (UID: "3e0093b5-3833-4b90-83f0-4ad6747ba032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.221489 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj" (OuterVolumeSpecName: "kube-api-access-w8dtj") pod "3e0093b5-3833-4b90-83f0-4ad6747ba032" (UID: "3e0093b5-3833-4b90-83f0-4ad6747ba032"). InnerVolumeSpecName "kube-api-access-w8dtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.271374 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e0093b5-3833-4b90-83f0-4ad6747ba032" (UID: "3e0093b5-3833-4b90-83f0-4ad6747ba032"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.316726 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.316760 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8dtj\" (UniqueName: \"kubernetes.io/projected/3e0093b5-3833-4b90-83f0-4ad6747ba032-kube-api-access-w8dtj\") on node \"crc\" DevicePath \"\"" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.316770 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0093b5-3833-4b90-83f0-4ad6747ba032-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.725975 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerID="4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705" exitCode=0 Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.726021 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerDied","Data":"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705"} Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.726069 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lffk" event={"ID":"3e0093b5-3833-4b90-83f0-4ad6747ba032","Type":"ContainerDied","Data":"8bf3fd97dbafc5c7b7524039b9b9a62937d6f55048f2ee0c7804404a31d12874"} Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.726076 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2lffk" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.726090 4724 scope.go:117] "RemoveContainer" containerID="4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.758112 4724 scope.go:117] "RemoveContainer" containerID="445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.760472 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.771279 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2lffk"] Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.781139 4724 scope.go:117] "RemoveContainer" containerID="ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.846408 4724 scope.go:117] "RemoveContainer" containerID="4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705" Feb 23 18:32:55 crc kubenswrapper[4724]: E0223 18:32:55.846792 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705\": container with ID starting with 4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705 not found: ID does not exist" containerID="4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.846821 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705"} err="failed to get container status \"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705\": rpc error: code = NotFound desc = could not find container \"4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705\": container with ID starting with 4269530ea887ba6be2e764640fa30a3d5f6744c1b54cf996159ed3da58de0705 not found: ID does not exist" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.846842 4724 scope.go:117] "RemoveContainer" containerID="445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36" Feb 23 18:32:55 crc kubenswrapper[4724]: E0223 18:32:55.847024 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36\": container with ID starting with 445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36 not found: ID does not exist" containerID="445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.847045 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36"} err="failed to get container status \"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36\": rpc error: code = NotFound desc = could not find container \"445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36\": container with ID starting with 445212a5febfd84363247d76a05ea7c3af936d9fa80e4028a998e6bc8c6f2e36 not found: ID does not exist" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.847060 4724 scope.go:117] "RemoveContainer" 
containerID="ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9" Feb 23 18:32:55 crc kubenswrapper[4724]: E0223 18:32:55.847400 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9\": container with ID starting with ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9 not found: ID does not exist" containerID="ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9" Feb 23 18:32:55 crc kubenswrapper[4724]: I0223 18:32:55.847429 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9"} err="failed to get container status \"ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9\": rpc error: code = NotFound desc = could not find container \"ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9\": container with ID starting with ed141fa827934cb56515de9613ba8f79da6f6e9722ee0f92020c6f6e9478fcb9 not found: ID does not exist" Feb 23 18:32:56 crc kubenswrapper[4724]: I0223 18:32:56.961464 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" path="/var/lib/kubelet/pods/3e0093b5-3833-4b90-83f0-4ad6747ba032/volumes" Feb 23 18:34:57 crc kubenswrapper[4724]: I0223 18:34:57.752755 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:34:57 crc kubenswrapper[4724]: I0223 18:34:57.753318 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:35:27 crc kubenswrapper[4724]: I0223 18:35:27.751940 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:35:27 crc kubenswrapper[4724]: I0223 18:35:27.752368 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:35:57 crc kubenswrapper[4724]: I0223 18:35:57.752301 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:35:57 crc kubenswrapper[4724]: I0223 18:35:57.752888 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:35:57 crc kubenswrapper[4724]: I0223 18:35:57.752932 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:35:57 crc kubenswrapper[4724]: I0223 18:35:57.753760 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:35:57 crc kubenswrapper[4724]: I0223 18:35:57.753818 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" gracePeriod=600 Feb 23 18:35:57 crc kubenswrapper[4724]: E0223 18:35:57.918566 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:35:58 crc kubenswrapper[4724]: I0223 18:35:58.381771 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" exitCode=0 Feb 23 18:35:58 crc kubenswrapper[4724]: I0223 18:35:58.381867 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3"} Feb 23 18:35:58 crc kubenswrapper[4724]: I0223 18:35:58.382112 4724 scope.go:117] "RemoveContainer" containerID="a329854bcf38cc29bacd7c8178aa7127d980f56b324d82cbcadbbb04f0afe34d" Feb 23 18:35:58 crc kubenswrapper[4724]: I0223 18:35:58.383348 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:35:58 crc kubenswrapper[4724]: E0223 18:35:58.383811 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:36:11 crc kubenswrapper[4724]: I0223 18:36:11.951218 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:36:11 crc kubenswrapper[4724]: E0223 18:36:11.952519 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:36:22 crc kubenswrapper[4724]: I0223 18:36:22.951159 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:36:22 crc kubenswrapper[4724]: E0223 18:36:22.951960 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:36:34 crc kubenswrapper[4724]: I0223 18:36:34.959768 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:36:34 crc kubenswrapper[4724]: E0223 18:36:34.960565 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:36:50 crc kubenswrapper[4724]: I0223 18:36:50.951114 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:36:50 crc kubenswrapper[4724]: E0223 18:36:50.952009 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:37:02 crc kubenswrapper[4724]: I0223 18:37:02.951245 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:37:02 crc kubenswrapper[4724]: E0223 18:37:02.952030 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:37:17 crc kubenswrapper[4724]: I0223 18:37:17.950869 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:37:17 crc kubenswrapper[4724]: E0223 18:37:17.951790 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:37:32 crc kubenswrapper[4724]: I0223 18:37:32.952125 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:37:32 crc kubenswrapper[4724]: E0223 18:37:32.952929 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:37:36 crc kubenswrapper[4724]: I0223 18:37:36.995459 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:36 crc kubenswrapper[4724]: E0223 18:37:36.998195 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="extract-content" Feb 23 18:37:36 crc kubenswrapper[4724]: I0223 18:37:36.998253 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="extract-content" Feb 23 18:37:36 crc kubenswrapper[4724]: E0223 18:37:36.998285 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="registry-server" Feb 23 18:37:36 crc kubenswrapper[4724]: I0223 18:37:36.998299 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="registry-server" Feb 23 18:37:36 crc kubenswrapper[4724]: E0223 18:37:36.998313 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="extract-utilities" Feb 23 18:37:36 crc kubenswrapper[4724]: I0223 18:37:36.998325 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="extract-utilities" Feb 23 18:37:36 crc kubenswrapper[4724]: I0223 18:37:36.998738 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0093b5-3833-4b90-83f0-4ad6747ba032" containerName="registry-server" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.001190 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.023773 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.148839 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.148933 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.148992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45lg\" (UniqueName: \"kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.250614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.250692 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.250734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45lg\" (UniqueName: \"kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.251439 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.251536 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.282589 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m45lg\" (UniqueName: \"kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg\") pod \"community-operators-7sh4b\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.321874 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:37 crc kubenswrapper[4724]: I0223 18:37:37.885274 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:38 crc kubenswrapper[4724]: I0223 18:37:38.346838 4724 generic.go:334] "Generic (PLEG): container finished" podID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerID="3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5" exitCode=0 Feb 23 18:37:38 crc kubenswrapper[4724]: I0223 18:37:38.346885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerDied","Data":"3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5"} Feb 23 18:37:38 crc kubenswrapper[4724]: I0223 18:37:38.347177 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerStarted","Data":"b3c96bd07aac7b814fdbe5dc30dc01b30fa2252451116e8e3dc7ff35f8019bb3"} Feb 23 18:37:38 crc kubenswrapper[4724]: I0223 18:37:38.349144 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:37:39 crc kubenswrapper[4724]: I0223 18:37:39.359645 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerStarted","Data":"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87"} Feb 23 18:37:41 crc kubenswrapper[4724]: I0223 18:37:41.387432 4724 generic.go:334] "Generic (PLEG): container finished" podID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerID="899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87" exitCode=0 Feb 23 18:37:41 crc kubenswrapper[4724]: I0223 18:37:41.387593 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerDied","Data":"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87"} Feb 23 18:37:42 crc kubenswrapper[4724]: I0223 18:37:42.400013 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerStarted","Data":"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2"} Feb 23 18:37:42 crc kubenswrapper[4724]: I0223 18:37:42.432256 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sh4b" podStartSLOduration=2.992188541 podStartE2EDuration="6.43223902s" podCreationTimestamp="2026-02-23 18:37:36 +0000 UTC" firstStartedPulling="2026-02-23 18:37:38.348946164 +0000 UTC m=+4014.165145754" lastFinishedPulling="2026-02-23 18:37:41.788996593 +0000 UTC m=+4017.605196233" observedRunningTime="2026-02-23 18:37:42.427522303 +0000 UTC m=+4018.243721913" watchObservedRunningTime="2026-02-23 
18:37:42.43223902 +0000 UTC m=+4018.248438610" Feb 23 18:37:44 crc kubenswrapper[4724]: I0223 18:37:44.959334 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:37:44 crc kubenswrapper[4724]: E0223 18:37:44.960075 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:37:47 crc kubenswrapper[4724]: I0223 18:37:47.322775 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:47 crc kubenswrapper[4724]: I0223 18:37:47.323096 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:47 crc kubenswrapper[4724]: I0223 18:37:47.368208 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:47 crc kubenswrapper[4724]: I0223 18:37:47.504139 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:47 crc kubenswrapper[4724]: I0223 18:37:47.606333 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:49 crc kubenswrapper[4724]: I0223 18:37:49.462708 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sh4b" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="registry-server" containerID="cri-o://14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2" gracePeriod=2 Feb 23 18:37:49 crc kubenswrapper[4724]: I0223 18:37:49.959225 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.045962 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities\") pod \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.046069 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content\") pod \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.046220 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45lg\" (UniqueName: \"kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg\") pod \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\" (UID: \"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e\") " Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.046880 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities" (OuterVolumeSpecName: "utilities") pod "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" (UID: "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.052877 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg" (OuterVolumeSpecName: "kube-api-access-m45lg") pod "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" (UID: "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e"). InnerVolumeSpecName "kube-api-access-m45lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.101227 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" (UID: "ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.148952 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.148987 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.148998 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m45lg\" (UniqueName: \"kubernetes.io/projected/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e-kube-api-access-m45lg\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.475477 4724 generic.go:334] "Generic (PLEG): container finished" podID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerID="14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2" exitCode=0 Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.475541 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sh4b" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.476434 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerDied","Data":"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2"} Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.476522 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sh4b" event={"ID":"ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e","Type":"ContainerDied","Data":"b3c96bd07aac7b814fdbe5dc30dc01b30fa2252451116e8e3dc7ff35f8019bb3"} Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.476552 4724 scope.go:117] "RemoveContainer" containerID="14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.513475 4724 scope.go:117] "RemoveContainer" containerID="899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.514835 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.526474 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sh4b"] Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.539527 4724 scope.go:117] "RemoveContainer" containerID="3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.579110 4724 scope.go:117] "RemoveContainer" containerID="14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2" Feb 23 18:37:50 crc kubenswrapper[4724]: E0223 18:37:50.579458 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2\": container with ID starting with 14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2 not found: ID does not exist" containerID="14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.579518 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2"} err="failed to get container status \"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2\": rpc error: code = NotFound desc = could not find container \"14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2\": container with ID starting with 14c979c86ac9a125370999b1016acefd7e830a8cd1cb698d0db0a0f3be1c74d2 not found: ID does not exist" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.579542 4724 scope.go:117] "RemoveContainer" containerID="899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87" Feb 23 18:37:50 crc kubenswrapper[4724]: E0223 18:37:50.579996 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87\": container with ID starting with 899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87 not found: ID does not exist" containerID="899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.580029 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87"} err="failed to get container status \"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87\": rpc error: code = NotFound desc = could not find container \"899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87\": container with ID starting with 899a86bf94c0e4b81ef9c19e3e158ec737b1e0a665481747d7eb565dce5a2d87 not found: ID does not exist" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.580050 4724 scope.go:117] "RemoveContainer" containerID="3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5" Feb 23 18:37:50 crc kubenswrapper[4724]: E0223 18:37:50.580375 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5\": container with ID starting with 3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5 not found: ID does not exist" containerID="3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.580478 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5"} err="failed to get container status \"3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5\": rpc error: code = NotFound desc = could not find container \"3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5\": container with ID starting with 3aa9d7a04869066c709689a0cb14b6e611c8e63c8b3da6af4b1233545a9f79c5 not found: ID does not exist" Feb 23 18:37:50 crc kubenswrapper[4724]: I0223 18:37:50.970430 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" path="/var/lib/kubelet/pods/ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e/volumes" Feb 23 18:37:59 crc kubenswrapper[4724]: I0223 18:37:59.951724 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:37:59 crc kubenswrapper[4724]: E0223 18:37:59.952664 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:38:10 crc kubenswrapper[4724]: I0223 18:38:10.951250 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:38:10 crc kubenswrapper[4724]: E0223 18:38:10.952254 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:38:24 crc kubenswrapper[4724]: I0223 18:38:24.951350 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:38:24 crc kubenswrapper[4724]: E0223 18:38:24.952128 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:38:38 crc kubenswrapper[4724]: I0223 18:38:38.954183 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:38:38 crc kubenswrapper[4724]: E0223 18:38:38.956547 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:38:49 crc kubenswrapper[4724]: I0223 18:38:49.951655 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:38:49 crc kubenswrapper[4724]: E0223 18:38:49.953072 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:38:57 crc kubenswrapper[4724]: I0223 18:38:57.999168 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:38:58 crc kubenswrapper[4724]: E0223 18:38:58.000206 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="extract-utilities" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.000224 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="extract-utilities" Feb 23 18:38:58 crc kubenswrapper[4724]: E0223 18:38:58.000263 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="registry-server" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.000271 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="registry-server" Feb 23 18:38:58 crc kubenswrapper[4724]: E0223 18:38:58.000300 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="extract-content" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.000309 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="extract-content" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.000545 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7b4352-a9d9-4e8d-ad00-78e1fe4c496e" containerName="registry-server" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.002300 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.018068 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.144501 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.144600 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99h76\" (UniqueName: \"kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.144749 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.247226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.247348 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99h76\" (UniqueName: \"kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.247458 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.247700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.247830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.273970 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99h76\" (UniqueName: \"kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76\") pod \"redhat-operators-px9cf\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.326221 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:38:58 crc kubenswrapper[4724]: I0223 18:38:58.804701 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:38:59 crc kubenswrapper[4724]: I0223 18:38:59.165509 4724 generic.go:334] "Generic (PLEG): container finished" podID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerID="8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7" exitCode=0 Feb 23 18:38:59 crc kubenswrapper[4724]: I0223 18:38:59.165576 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerDied","Data":"8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7"} Feb 23 18:38:59 crc kubenswrapper[4724]: I0223 18:38:59.165819 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerStarted","Data":"5fdeb92e9791fd200150c64cf9e666c55aca67627059c167cfc77d9d569bd4cf"} Feb 23 18:39:01 crc kubenswrapper[4724]: I0223 18:39:01.188112 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerStarted","Data":"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8"} Feb 23 18:39:01 crc kubenswrapper[4724]: I0223 18:39:01.951603 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:39:01 crc kubenswrapper[4724]: E0223 18:39:01.951913 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:39:05 crc kubenswrapper[4724]: I0223 18:39:05.294284 4724 generic.go:334] "Generic (PLEG): container finished" podID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerID="6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8" exitCode=0 Feb 23 18:39:05 crc kubenswrapper[4724]: I0223 18:39:05.294325 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerDied","Data":"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8"} Feb 23 18:39:06 crc kubenswrapper[4724]: I0223 18:39:06.307137 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerStarted","Data":"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07"} Feb 23 18:39:06 crc kubenswrapper[4724]: I0223 18:39:06.346002 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-px9cf" podStartSLOduration=2.802394573 podStartE2EDuration="9.345981828s" podCreationTimestamp="2026-02-23 18:38:57 +0000 UTC" firstStartedPulling="2026-02-23 18:38:59.167035512 +0000 UTC m=+4094.983235112" lastFinishedPulling="2026-02-23 18:39:05.710622757 +0000 UTC m=+4101.526822367" observedRunningTime="2026-02-23 18:39:06.331801173 +0000 UTC m=+4102.148000813" watchObservedRunningTime="2026-02-23 18:39:06.345981828 +0000 UTC m=+4102.162181428" Feb 23 18:39:08 crc kubenswrapper[4724]: I0223 18:39:08.327630 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:08 crc kubenswrapper[4724]: I0223 18:39:08.327967 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:09 crc kubenswrapper[4724]: I0223 18:39:09.393687 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-px9cf" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="registry-server" probeResult="failure" output=< Feb 23 18:39:09 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:39:09 crc kubenswrapper[4724]: > Feb 23 18:39:15 crc kubenswrapper[4724]: I0223 18:39:15.951830 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:39:15 crc kubenswrapper[4724]: E0223 18:39:15.953654 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:39:18 crc kubenswrapper[4724]: I0223 18:39:18.628115 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:18 crc kubenswrapper[4724]: I0223 18:39:18.696713 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:18 crc kubenswrapper[4724]: I0223 18:39:18.871599 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:39:20 crc kubenswrapper[4724]: I0223 18:39:20.438051 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-px9cf" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="registry-server" containerID="cri-o://7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07" gracePeriod=2 Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.173913 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.346584 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content\") pod \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.346660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities\") pod \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.346956 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99h76\" (UniqueName: \"kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76\") pod \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\" (UID: \"731eb56e-9370-4eb9-ae10-bb72e1b4bfff\") " Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.347540 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities" (OuterVolumeSpecName: "utilities") pod "731eb56e-9370-4eb9-ae10-bb72e1b4bfff" (UID: "731eb56e-9370-4eb9-ae10-bb72e1b4bfff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.347727 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.354241 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76" (OuterVolumeSpecName: "kube-api-access-99h76") pod "731eb56e-9370-4eb9-ae10-bb72e1b4bfff" (UID: "731eb56e-9370-4eb9-ae10-bb72e1b4bfff"). InnerVolumeSpecName "kube-api-access-99h76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.448766 4724 generic.go:334] "Generic (PLEG): container finished" podID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerID="7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07" exitCode=0 Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.448821 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerDied","Data":"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07"} Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.448837 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-px9cf" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.448852 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-px9cf" event={"ID":"731eb56e-9370-4eb9-ae10-bb72e1b4bfff","Type":"ContainerDied","Data":"5fdeb92e9791fd200150c64cf9e666c55aca67627059c167cfc77d9d569bd4cf"} Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.448873 4724 scope.go:117] "RemoveContainer" containerID="7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.449529 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99h76\" (UniqueName: \"kubernetes.io/projected/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-kube-api-access-99h76\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.465084 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "731eb56e-9370-4eb9-ae10-bb72e1b4bfff" (UID: "731eb56e-9370-4eb9-ae10-bb72e1b4bfff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.469203 4724 scope.go:117] "RemoveContainer" containerID="6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.499985 4724 scope.go:117] "RemoveContainer" containerID="8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.534770 4724 scope.go:117] "RemoveContainer" containerID="7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07" Feb 23 18:39:21 crc kubenswrapper[4724]: E0223 18:39:21.535233 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07\": container with ID starting with 7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07 not found: ID does not exist" containerID="7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.535289 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07"} err="failed to get container status \"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07\": rpc error: code = NotFound desc = could not find container \"7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07\": container with ID starting with 7fc608ef6006c69b5af1ad525d294ba07cd5673d3ccb7ca6c23e7c6adae55e07 not found: ID does not exist" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.535320 4724 scope.go:117] "RemoveContainer" containerID="6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8" Feb 23 18:39:21 crc kubenswrapper[4724]: E0223 18:39:21.536010 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8\": container with ID starting with 6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8 not found: ID does not exist" containerID="6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.536038 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8"} err="failed to get container status \"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8\": rpc error: code = NotFound desc = could not find container \"6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8\": container with ID starting with 6781765cd4c277fa06a15776c25be7ec4325053915ec3d2a870ead1bf3a6eea8 not found: ID does not exist" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.536056 4724 scope.go:117] "RemoveContainer" containerID="8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7" Feb 23 18:39:21 crc kubenswrapper[4724]: E0223 18:39:21.536316 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7\": container with ID starting with 8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7 not found: ID does not exist" containerID="8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7" Feb 23 18:39:21 crc 
kubenswrapper[4724]: I0223 18:39:21.536354 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7"} err="failed to get container status \"8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7\": rpc error: code = NotFound desc = could not find container \"8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7\": container with ID starting with 8450ec1e1f952dc6b0b1509adb6491e30b4e43adf83083aed90c0b58aaff10a7 not found: ID does not exist" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.551983 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731eb56e-9370-4eb9-ae10-bb72e1b4bfff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.782307 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:39:21 crc kubenswrapper[4724]: I0223 18:39:21.794326 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-px9cf"] Feb 23 18:39:22 crc kubenswrapper[4724]: I0223 18:39:22.961497 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" path="/var/lib/kubelet/pods/731eb56e-9370-4eb9-ae10-bb72e1b4bfff/volumes" Feb 23 18:39:30 crc kubenswrapper[4724]: I0223 18:39:30.952205 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:39:30 crc kubenswrapper[4724]: E0223 18:39:30.953530 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.231961 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:31 crc kubenswrapper[4724]: E0223 18:39:31.232379 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="extract-utilities" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.232744 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="extract-utilities" Feb 23 18:39:31 crc kubenswrapper[4724]: E0223 18:39:31.232761 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="registry-server" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.232768 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="registry-server" Feb 23 18:39:31 crc kubenswrapper[4724]: E0223 18:39:31.232793 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="extract-content" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.232809 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="extract-content" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.233105 4724 
memory_manager.go:354] "RemoveStaleState removing state" podUID="731eb56e-9370-4eb9-ae10-bb72e1b4bfff" containerName="registry-server" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.234765 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.255165 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.266351 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zlv\" (UniqueName: \"kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.266890 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.267151 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.371329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.371488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zlv\" (UniqueName: \"kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.371920 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.372037 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.372303 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities\") 
pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.771466 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zlv\" (UniqueName: \"kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv\") pod \"redhat-marketplace-cwjrw\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:31 crc kubenswrapper[4724]: I0223 18:39:31.859132 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:32 crc kubenswrapper[4724]: I0223 18:39:32.431515 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:32 crc kubenswrapper[4724]: I0223 18:39:32.585975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerStarted","Data":"2ec5fd88a4687a7ff4a35f48d78af4499c28dd50e2b60c6da997f958ac15a27c"} Feb 23 18:39:33 crc kubenswrapper[4724]: I0223 18:39:33.595656 4724 generic.go:334] "Generic (PLEG): container finished" podID="2165b868-e869-4b0f-84d1-f193334efa14" containerID="572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d" exitCode=0 Feb 23 18:39:33 crc kubenswrapper[4724]: I0223 18:39:33.595711 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerDied","Data":"572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d"} Feb 23 18:39:34 crc kubenswrapper[4724]: I0223 18:39:34.606885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerStarted","Data":"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4"} Feb 23 18:39:35 crc kubenswrapper[4724]: I0223 18:39:35.616324 4724 generic.go:334] "Generic (PLEG): container finished" podID="2165b868-e869-4b0f-84d1-f193334efa14" containerID="f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4" exitCode=0 Feb 23 18:39:35 crc kubenswrapper[4724]: I0223 18:39:35.616440 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerDied","Data":"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4"} Feb 23 18:39:36 crc kubenswrapper[4724]: I0223 18:39:36.630380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerStarted","Data":"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d"} Feb 23 18:39:36 crc kubenswrapper[4724]: I0223 18:39:36.654512 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwjrw" podStartSLOduration=3.23128747 podStartE2EDuration="5.654490245s" podCreationTimestamp="2026-02-23 18:39:31 +0000 UTC" firstStartedPulling="2026-02-23 18:39:33.597831028 +0000 UTC m=+4129.414030628" lastFinishedPulling="2026-02-23 18:39:36.021033803 +0000 UTC m=+4131.837233403" observedRunningTime="2026-02-23 18:39:36.64709844 +0000 UTC 
m=+4132.463298050" watchObservedRunningTime="2026-02-23 18:39:36.654490245 +0000 UTC m=+4132.470689865" Feb 23 18:39:41 crc kubenswrapper[4724]: I0223 18:39:41.860048 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:41 crc kubenswrapper[4724]: I0223 18:39:41.861094 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:41 crc kubenswrapper[4724]: I0223 18:39:41.923340 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:41 crc kubenswrapper[4724]: I0223 18:39:41.951212 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:39:41 crc kubenswrapper[4724]: E0223 18:39:41.951607 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:39:42 crc kubenswrapper[4724]: I0223 18:39:42.733564 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:44 crc kubenswrapper[4724]: I0223 18:39:44.402625 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:44 crc kubenswrapper[4724]: I0223 18:39:44.703421 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwjrw" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="registry-server" containerID="cri-o://d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d" gracePeriod=2 Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.227713 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.380951 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities\") pod \"2165b868-e869-4b0f-84d1-f193334efa14\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.381375 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8zlv\" (UniqueName: \"kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv\") pod \"2165b868-e869-4b0f-84d1-f193334efa14\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.381418 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content\") pod \"2165b868-e869-4b0f-84d1-f193334efa14\" (UID: \"2165b868-e869-4b0f-84d1-f193334efa14\") " Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.381776 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities" (OuterVolumeSpecName: "utilities") pod "2165b868-e869-4b0f-84d1-f193334efa14" (UID: "2165b868-e869-4b0f-84d1-f193334efa14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.382003 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.394742 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv" (OuterVolumeSpecName: "kube-api-access-j8zlv") pod "2165b868-e869-4b0f-84d1-f193334efa14" (UID: "2165b868-e869-4b0f-84d1-f193334efa14"). InnerVolumeSpecName "kube-api-access-j8zlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.430370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2165b868-e869-4b0f-84d1-f193334efa14" (UID: "2165b868-e869-4b0f-84d1-f193334efa14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.483929 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8zlv\" (UniqueName: \"kubernetes.io/projected/2165b868-e869-4b0f-84d1-f193334efa14-kube-api-access-j8zlv\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.484227 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2165b868-e869-4b0f-84d1-f193334efa14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.714296 4724 generic.go:334] "Generic (PLEG): container finished" podID="2165b868-e869-4b0f-84d1-f193334efa14" containerID="d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d" exitCode=0 Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.714352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerDied","Data":"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d"} Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.714404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwjrw" event={"ID":"2165b868-e869-4b0f-84d1-f193334efa14","Type":"ContainerDied","Data":"2ec5fd88a4687a7ff4a35f48d78af4499c28dd50e2b60c6da997f958ac15a27c"} Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.714431 4724 scope.go:117] "RemoveContainer" containerID="d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.714668 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwjrw" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.735590 4724 scope.go:117] "RemoveContainer" containerID="f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.766692 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.768657 4724 scope.go:117] "RemoveContainer" containerID="572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.775503 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwjrw"] Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.819600 4724 scope.go:117] "RemoveContainer" containerID="d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d" Feb 23 18:39:45 crc kubenswrapper[4724]: E0223 18:39:45.820254 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d\": container with ID starting with d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d not found: ID does not exist" containerID="d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.820306 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d"} err="failed to get container status \"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d\": rpc error: code = NotFound desc = could not find container \"d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d\": container with ID starting with d85850d89d41684a746e54c3c4d1050ff369abed001fb27f96bb7cd73769902d not found: ID does not exist" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.820339 4724 scope.go:117] "RemoveContainer" containerID="f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4" Feb 23 18:39:45 crc kubenswrapper[4724]: E0223 18:39:45.821600 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4\": container with ID starting with f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4 not found: ID does not exist" containerID="f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.821643 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4"} err="failed to get container status \"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4\": rpc error: code = NotFound desc = could not find container \"f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4\": container with ID starting with f29068b33de84fd06d791a8ce2cf15d5171ee240675d23a0beff94a183db7ae4 not found: ID does not exist" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.821672 4724 scope.go:117] "RemoveContainer" containerID="572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d" Feb 23 18:39:45 crc kubenswrapper[4724]: E0223 18:39:45.822079 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d\": container with ID starting with 572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d not found: ID does not exist" containerID="572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d" Feb 23 18:39:45 crc kubenswrapper[4724]: I0223 18:39:45.822141 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d"} err="failed to get container status \"572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d\": rpc error: code = NotFound desc = could not find container \"572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d\": container with ID starting with 572eb5a5294715d261a78d9b5c611ad57df100fb4ea6acd5683529170d51766d not found: ID does not exist" Feb 23 18:39:46 crc kubenswrapper[4724]: I0223 18:39:46.964480 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2165b868-e869-4b0f-84d1-f193334efa14" path="/var/lib/kubelet/pods/2165b868-e869-4b0f-84d1-f193334efa14/volumes" Feb 23 18:39:52 crc kubenswrapper[4724]: I0223 18:39:52.951592 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:39:52 crc kubenswrapper[4724]: E0223 18:39:52.952330 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:40:03 crc kubenswrapper[4724]: I0223 18:40:03.951476 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:40:03 crc kubenswrapper[4724]: E0223 18:40:03.952450 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:40:18 crc kubenswrapper[4724]: I0223 18:40:18.951701 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:40:18 crc kubenswrapper[4724]: E0223 18:40:18.952517 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:40:33 crc kubenswrapper[4724]: I0223 18:40:33.952469 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:40:33 crc kubenswrapper[4724]: E0223 18:40:33.958085 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:40:47 crc kubenswrapper[4724]: I0223 18:40:47.951364 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:40:47 crc kubenswrapper[4724]: E0223 18:40:47.952205 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:41:00 crc kubenswrapper[4724]: I0223 18:41:00.951243 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:41:01 crc kubenswrapper[4724]: I0223 18:41:01.726145 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07"} Feb 23 18:43:27 crc kubenswrapper[4724]: I0223 18:43:27.752468 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:43:27 crc kubenswrapper[4724]: I0223 18:43:27.753109 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.933306 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:43:52 crc kubenswrapper[4724]: E0223 18:43:52.934414 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="extract-utilities" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.934433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="extract-utilities" Feb 23 18:43:52 crc kubenswrapper[4724]: E0223 18:43:52.934458 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="extract-content" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.934465 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="extract-content" Feb 23 18:43:52 crc kubenswrapper[4724]: E0223 18:43:52.934505 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="registry-server" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.934513 4724 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="registry-server" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.934755 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2165b868-e869-4b0f-84d1-f193334efa14" containerName="registry-server" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.937039 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.943768 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.999609 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9gj\" (UniqueName: \"kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.999718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:52 crc kubenswrapper[4724]: I0223 18:43:52.999845 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.101512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.101672 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9gj\" (UniqueName: \"kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.101703 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.102071 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.102116 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.120219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9gj\" (UniqueName: \"kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj\") pod \"certified-operators-z9v22\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.260893 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:43:53 crc kubenswrapper[4724]: I0223 18:43:53.743202 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:43:54 crc kubenswrapper[4724]: I0223 18:43:54.430610 4724 generic.go:334] "Generic (PLEG): container finished" podID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerID="b56b3cf88d04813ba09fa8d2f22a2dc68124e3c8e13afa68dfcd168868d28fac" exitCode=0 Feb 23 18:43:54 crc kubenswrapper[4724]: I0223 18:43:54.430650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerDied","Data":"b56b3cf88d04813ba09fa8d2f22a2dc68124e3c8e13afa68dfcd168868d28fac"} Feb 23 18:43:54 crc kubenswrapper[4724]: I0223 18:43:54.430939 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerStarted","Data":"1917f08460b8bb5f2ec9782ed2cc6f6de079e7667f982af90e1285ad2edfea98"} Feb 23 18:43:54 crc kubenswrapper[4724]: I0223 18:43:54.432701 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:43:56 crc kubenswrapper[4724]: I0223 18:43:56.455614 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerStarted","Data":"1638784e65799a225bf3999a722f3d22a0d7b17b6c5b11a028b48a4ba6bd7eac"} Feb 23 18:43:57 crc kubenswrapper[4724]: I0223 18:43:57.474230 4724 generic.go:334] "Generic (PLEG): container finished" podID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerID="1638784e65799a225bf3999a722f3d22a0d7b17b6c5b11a028b48a4ba6bd7eac" exitCode=0 Feb 23 18:43:57 crc kubenswrapper[4724]: I0223 18:43:57.474285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerDied","Data":"1638784e65799a225bf3999a722f3d22a0d7b17b6c5b11a028b48a4ba6bd7eac"} Feb 23 18:43:57 crc kubenswrapper[4724]: I0223 18:43:57.752037 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:43:57 crc kubenswrapper[4724]: I0223 18:43:57.752456 4724 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:43:58 crc kubenswrapper[4724]: I0223 18:43:58.487762 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerStarted","Data":"a7bcbaa2060123e09f91d79dbd0db6d3b60b02e0110d67e0e540117224c5ddb4"} Feb 23 18:43:58 crc kubenswrapper[4724]: I0223 18:43:58.513156 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z9v22" podStartSLOduration=3.027284424 podStartE2EDuration="6.513125058s" podCreationTimestamp="2026-02-23 18:43:52 +0000 UTC" firstStartedPulling="2026-02-23 18:43:54.432323484 +0000 UTC m=+4390.248523084" lastFinishedPulling="2026-02-23 18:43:57.918164118 +0000 UTC m=+4393.734363718" observedRunningTime="2026-02-23 18:43:58.507029237 +0000 UTC m=+4394.323228877" watchObservedRunningTime="2026-02-23 18:43:58.513125058 +0000 UTC m=+4394.329324698" Feb 23 18:44:03 crc kubenswrapper[4724]: I0223 18:44:03.261570 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:03 crc kubenswrapper[4724]: I0223 18:44:03.262067 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:03 crc kubenswrapper[4724]: I0223 18:44:03.307918 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:03 crc kubenswrapper[4724]: I0223 18:44:03.573008 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:03 crc kubenswrapper[4724]: I0223 18:44:03.631947 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:44:05 crc kubenswrapper[4724]: I0223 18:44:05.547583 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z9v22" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="registry-server" containerID="cri-o://a7bcbaa2060123e09f91d79dbd0db6d3b60b02e0110d67e0e540117224c5ddb4" gracePeriod=2 Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.565582 4724 generic.go:334] "Generic (PLEG): container finished" podID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerID="a7bcbaa2060123e09f91d79dbd0db6d3b60b02e0110d67e0e540117224c5ddb4" exitCode=0 Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.565684 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerDied","Data":"a7bcbaa2060123e09f91d79dbd0db6d3b60b02e0110d67e0e540117224c5ddb4"} Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.565992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9v22" event={"ID":"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a","Type":"ContainerDied","Data":"1917f08460b8bb5f2ec9782ed2cc6f6de079e7667f982af90e1285ad2edfea98"} Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.566013 4724 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="1917f08460b8bb5f2ec9782ed2cc6f6de079e7667f982af90e1285ad2edfea98" Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.748973 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.923398 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9gj\" (UniqueName: \"kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj\") pod \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.923514 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities\") pod \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.923585 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content\") pod \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\" (UID: \"25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a\") " Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.924681 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities" (OuterVolumeSpecName: "utilities") pod "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" (UID: "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.929572 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj" (OuterVolumeSpecName: "kube-api-access-mf9gj") pod "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" (UID: "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a"). InnerVolumeSpecName "kube-api-access-mf9gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:44:06 crc kubenswrapper[4724]: I0223 18:44:06.978021 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" (UID: "25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.026064 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9gj\" (UniqueName: \"kubernetes.io/projected/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-kube-api-access-mf9gj\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.026103 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.026119 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.573113 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9v22" Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.612527 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:44:07 crc kubenswrapper[4724]: I0223 18:44:07.628277 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z9v22"] Feb 23 18:44:08 crc kubenswrapper[4724]: I0223 18:44:08.963666 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" path="/var/lib/kubelet/pods/25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a/volumes" Feb 23 18:44:26 crc kubenswrapper[4724]: I0223 18:44:26.892745 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f447dffc7-s2mfq" podUID="46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 23 18:44:27 crc kubenswrapper[4724]: I0223 18:44:27.754357 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:44:27 crc kubenswrapper[4724]: I0223 18:44:27.754439 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:44:27 crc kubenswrapper[4724]: I0223 18:44:27.754487 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:44:27 crc kubenswrapper[4724]: I0223 18:44:27.755323 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:44:27 crc kubenswrapper[4724]: I0223 18:44:27.755406 4724 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07" gracePeriod=600 Feb 23 18:44:28 crc kubenswrapper[4724]: I0223 18:44:28.773935 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07" exitCode=0 Feb 23 18:44:28 crc kubenswrapper[4724]: I0223 18:44:28.774025 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07"} Feb 23 18:44:28 crc kubenswrapper[4724]: I0223 18:44:28.774669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"} Feb 23 18:44:28 crc kubenswrapper[4724]: I0223 18:44:28.774709 4724 scope.go:117] "RemoveContainer" containerID="b209a8a23ffecc79bcc0a715d72e7230168eca6b1013b5c775a2fb6a015494f3" Feb 23 18:44:58 crc kubenswrapper[4724]: E0223 18:44:58.195550 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:37332->38.102.83.174:46225: write tcp 38.102.83.174:37332->38.102.83.174:46225: write: broken pipe Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.173191 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx"] Feb 23 18:45:00 crc kubenswrapper[4724]: E0223 18:45:00.175477 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="registry-server" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.175596 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="registry-server" Feb 23 18:45:00 crc kubenswrapper[4724]: E0223 18:45:00.175725 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="extract-utilities" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.175804 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="extract-utilities" Feb 23 18:45:00 crc kubenswrapper[4724]: E0223 18:45:00.175894 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="extract-content" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.175968 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="extract-content" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.176287 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a1e5d6-17e9-4d24-b9ac-8d8d18e3600a" containerName="registry-server" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.177440 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.181055 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.181598 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.188182 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx"] Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.343473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28l7j\" (UniqueName: \"kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.343789 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.343843 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.446478 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28l7j\" (UniqueName: \"kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.446594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.446692 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.447883 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume\") pod 
\"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.464046 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.467363 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28l7j\" (UniqueName: \"kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j\") pod \"collect-profiles-29531205-fh8gx\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: E0223 18:45:00.482287 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:37380->38.102.83.174:46225: write tcp 38.102.83.174:37380->38.102.83.174:46225: write: broken pipe Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.499132 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:00 crc kubenswrapper[4724]: I0223 18:45:00.994793 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx"] Feb 23 18:45:01 crc kubenswrapper[4724]: I0223 18:45:01.091179 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" event={"ID":"90e1d97e-ebff-4ce5-913d-a30c38a40673","Type":"ContainerStarted","Data":"1eaf68671a7dab216fe994d93e8612d3e5db419713dafd2c1440c2fe9c361814"} Feb 23 18:45:02 crc kubenswrapper[4724]: I0223 18:45:02.101761 4724 generic.go:334] "Generic (PLEG): container finished" podID="90e1d97e-ebff-4ce5-913d-a30c38a40673" containerID="bdcda3de95a1540b733fddd7d6b450de0a590b783605f16abfb58100214f5486" exitCode=0 Feb 23 18:45:02 crc kubenswrapper[4724]: I0223 18:45:02.101835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" event={"ID":"90e1d97e-ebff-4ce5-913d-a30c38a40673","Type":"ContainerDied","Data":"bdcda3de95a1540b733fddd7d6b450de0a590b783605f16abfb58100214f5486"} Feb 23 18:45:03 crc kubenswrapper[4724]: I0223 18:45:03.973136 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.122177 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" event={"ID":"90e1d97e-ebff-4ce5-913d-a30c38a40673","Type":"ContainerDied","Data":"1eaf68671a7dab216fe994d93e8612d3e5db419713dafd2c1440c2fe9c361814"} Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.122263 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eaf68671a7dab216fe994d93e8612d3e5db419713dafd2c1440c2fe9c361814" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.122214 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-fh8gx" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.133354 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume\") pod \"90e1d97e-ebff-4ce5-913d-a30c38a40673\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.133529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume\") pod \"90e1d97e-ebff-4ce5-913d-a30c38a40673\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.133959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28l7j\" (UniqueName: \"kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j\") pod \"90e1d97e-ebff-4ce5-913d-a30c38a40673\" (UID: \"90e1d97e-ebff-4ce5-913d-a30c38a40673\") " Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.134527 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume" (OuterVolumeSpecName: "config-volume") pod "90e1d97e-ebff-4ce5-913d-a30c38a40673" (UID: "90e1d97e-ebff-4ce5-913d-a30c38a40673"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.137133 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e1d97e-ebff-4ce5-913d-a30c38a40673-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.142487 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j" (OuterVolumeSpecName: "kube-api-access-28l7j") pod "90e1d97e-ebff-4ce5-913d-a30c38a40673" (UID: "90e1d97e-ebff-4ce5-913d-a30c38a40673"). InnerVolumeSpecName "kube-api-access-28l7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.142948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90e1d97e-ebff-4ce5-913d-a30c38a40673" (UID: "90e1d97e-ebff-4ce5-913d-a30c38a40673"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.238658 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28l7j\" (UniqueName: \"kubernetes.io/projected/90e1d97e-ebff-4ce5-913d-a30c38a40673-kube-api-access-28l7j\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:04 crc kubenswrapper[4724]: I0223 18:45:04.238698 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90e1d97e-ebff-4ce5-913d-a30c38a40673-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:05 crc kubenswrapper[4724]: I0223 18:45:05.042611 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"] Feb 23 18:45:05 crc kubenswrapper[4724]: I0223 18:45:05.052209 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531160-2jhl9"] Feb 23 18:45:06 crc kubenswrapper[4724]: I0223 18:45:06.972192 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b908dd80-78c0-49ab-9091-758eec839746" path="/var/lib/kubelet/pods/b908dd80-78c0-49ab-9091-758eec839746/volumes" Feb 23 18:45:51 crc kubenswrapper[4724]: I0223 18:45:51.310001 4724 scope.go:117] "RemoveContainer" containerID="a832189586f1522db0f96b9fe38520118a7e97e9accf22634fb6643b9a33d9b9" Feb 23 18:46:57 crc kubenswrapper[4724]: I0223 18:46:57.751982 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:46:57 crc kubenswrapper[4724]: I0223 18:46:57.752771 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:47:27 crc kubenswrapper[4724]: I0223 18:47:27.752301 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:47:27 crc kubenswrapper[4724]: I0223 18:47:27.752920 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:47:57 crc kubenswrapper[4724]: I0223 18:47:57.752807 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:47:57 crc kubenswrapper[4724]: I0223 18:47:57.753666 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:47:57 crc kubenswrapper[4724]: I0223 18:47:57.753761 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:47:57 crc kubenswrapper[4724]: I0223 18:47:57.755041 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:47:57 crc kubenswrapper[4724]: I0223 18:47:57.755179 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" gracePeriod=600 Feb 23 18:47:57 crc kubenswrapper[4724]: E0223 18:47:57.875873 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:47:58 crc kubenswrapper[4724]: I0223 18:47:58.433559 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" exitCode=0 Feb 23 18:47:58 crc kubenswrapper[4724]: I0223 18:47:58.433598 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"} Feb 23 18:47:58 crc kubenswrapper[4724]: I0223 18:47:58.433913 4724 scope.go:117] "RemoveContainer" containerID="c2651a85152b12f77a4e31ce37ecf6c04ac00087b49a3fcfe0f78e33571bca07" Feb 23 18:47:58 crc kubenswrapper[4724]: I0223 18:47:58.434938 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:47:58 crc kubenswrapper[4724]: E0223 18:47:58.435419 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:48:12 crc kubenswrapper[4724]: I0223 18:48:12.951770 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:48:12 crc kubenswrapper[4724]: E0223 18:48:12.952602 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:48:24 crc kubenswrapper[4724]: I0223 18:48:24.964481 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:48:24 crc kubenswrapper[4724]: E0223 18:48:24.965407 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:48:39 crc kubenswrapper[4724]: I0223 18:48:39.952043 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:48:39 crc kubenswrapper[4724]: E0223 18:48:39.953678 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:48:52 crc kubenswrapper[4724]: I0223 18:48:52.951528 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:48:52 crc kubenswrapper[4724]: E0223 18:48:52.952262 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:49:04 crc kubenswrapper[4724]: I0223 18:49:04.962173 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:49:04 crc kubenswrapper[4724]: E0223 18:49:04.966539 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:49:15 crc kubenswrapper[4724]: I0223 18:49:15.951275 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:49:15 crc kubenswrapper[4724]: E0223 18:49:15.952053 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:49:29 crc kubenswrapper[4724]: I0223 18:49:29.951157 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:49:29 crc kubenswrapper[4724]: E0223 18:49:29.952057 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:49:44 crc kubenswrapper[4724]: I0223 18:49:44.960154 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:49:44 crc kubenswrapper[4724]: E0223 18:49:44.961158 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:49:56 crc kubenswrapper[4724]: I0223 18:49:56.951474 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:49:56 crc kubenswrapper[4724]: E0223 18:49:56.952215 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.456637 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:01 crc kubenswrapper[4724]: E0223 18:50:01.458034 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e1d97e-ebff-4ce5-913d-a30c38a40673" containerName="collect-profiles" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.458049 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e1d97e-ebff-4ce5-913d-a30c38a40673" containerName="collect-profiles" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.458362 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e1d97e-ebff-4ce5-913d-a30c38a40673" containerName="collect-profiles" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.460854 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.479526 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.549520 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjv8\" (UniqueName: \"kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.549732 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.549759 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.651771 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjv8\" (UniqueName: \"kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.652205 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.652231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.652713 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.652775 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.673255 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wvjv8\" (UniqueName: \"kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8\") pod \"redhat-marketplace-n79gh\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:01 crc kubenswrapper[4724]: I0223 18:50:01.788736 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:02 crc kubenswrapper[4724]: I0223 18:50:02.271294 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:02 crc kubenswrapper[4724]: I0223 18:50:02.631293 4724 generic.go:334] "Generic (PLEG): container finished" podID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerID="f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b" exitCode=0 Feb 23 18:50:02 crc kubenswrapper[4724]: I0223 18:50:02.631348 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerDied","Data":"f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b"} Feb 23 18:50:02 crc kubenswrapper[4724]: I0223 18:50:02.631995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerStarted","Data":"07fb2e7da19133815a40c23ab1ad4c8d3d1c9b9c166f52fb56ba50d52ca18ef8"} Feb 23 18:50:02 crc kubenswrapper[4724]: I0223 18:50:02.633372 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:50:04 crc kubenswrapper[4724]: I0223 18:50:04.656437 4724 generic.go:334] "Generic (PLEG): container finished" podID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerID="614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391" exitCode=0 Feb 23 18:50:04 crc kubenswrapper[4724]: I0223 18:50:04.656521 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerDied","Data":"614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391"} Feb 23 18:50:05 crc kubenswrapper[4724]: I0223 18:50:05.675346 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerStarted","Data":"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600"} Feb 23 18:50:05 crc kubenswrapper[4724]: I0223 18:50:05.698927 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n79gh" podStartSLOduration=2.2900469980000002 podStartE2EDuration="4.698899792s" podCreationTimestamp="2026-02-23 18:50:01 +0000 UTC" firstStartedPulling="2026-02-23 18:50:02.633130068 +0000 UTC m=+4758.449329668" lastFinishedPulling="2026-02-23 18:50:05.041982852 +0000 UTC m=+4760.858182462" observedRunningTime="2026-02-23 18:50:05.69365973 +0000 UTC m=+4761.509859350" watchObservedRunningTime="2026-02-23 18:50:05.698899792 +0000 UTC m=+4761.515099402" Feb 23 18:50:11 crc kubenswrapper[4724]: I0223 18:50:11.788925 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:11 crc kubenswrapper[4724]: I0223 18:50:11.789505 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:11 crc kubenswrapper[4724]: I0223 18:50:11.836088 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:11 crc kubenswrapper[4724]: I0223 18:50:11.951467 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:50:11 crc kubenswrapper[4724]: E0223 18:50:11.951707 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.029709 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.031885 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.044639 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.188225 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbk6\" (UniqueName: \"kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.188269 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.188379 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.228450 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.231043 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.240276 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290207 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nq2\" (UniqueName: \"kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290273 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290345 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbk6\" (UniqueName: \"kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290450 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290479 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.290924 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.291172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.328977 4724 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-qpbk6\" (UniqueName: \"kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6\") pod \"redhat-operators-n445s\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.353180 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.392249 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.392612 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4nq2\" (UniqueName: \"kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.392662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.392786 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.393030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:12 crc kubenswrapper[4724]: I0223 18:50:12.866240 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4nq2\" (UniqueName: \"kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2\") pod \"community-operators-fnbkc\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.108597 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.156788 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.293245 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.648573 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.765316 4724 generic.go:334] "Generic (PLEG): container finished" podID="9df81af7-c367-482b-a3bc-300150722639" containerID="c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9" exitCode=0 Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.765403 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerDied","Data":"c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9"} Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.765780 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerStarted","Data":"00ff460617a766b5b3cd49fa77cf088367a9fbaa1cd221f132d7500a5cb1454b"} Feb 23 18:50:13 crc kubenswrapper[4724]: I0223 18:50:13.767482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerStarted","Data":"889edb58c18d8d8f98f21519accb7d3f7ef162781e08a7cda3045f6008520e72"} Feb 23 18:50:14 crc kubenswrapper[4724]: I0223 18:50:14.622771 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:14 crc kubenswrapper[4724]: I0223 18:50:14.778037 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerStarted","Data":"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead"} Feb 23 18:50:14 crc kubenswrapper[4724]: I0223 18:50:14.779740 4724 generic.go:334] "Generic (PLEG): container finished" podID="9f15a35e-170a-4bc9-b921-113e2075e656" containerID="5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740" exitCode=0 Feb 23 18:50:14 crc kubenswrapper[4724]: I0223 18:50:14.779797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerDied","Data":"5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740"} Feb 23 18:50:14 crc kubenswrapper[4724]: I0223 18:50:14.780016 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n79gh" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="registry-server" containerID="cri-o://ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600" gracePeriod=2 Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.245800 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.362318 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities\") pod \"e26b2a18-8ac8-4b86-b13e-513820f9671e\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.362489 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjv8\" (UniqueName: \"kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8\") pod \"e26b2a18-8ac8-4b86-b13e-513820f9671e\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.362539 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content\") pod \"e26b2a18-8ac8-4b86-b13e-513820f9671e\" (UID: \"e26b2a18-8ac8-4b86-b13e-513820f9671e\") " Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.363313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities" (OuterVolumeSpecName: "utilities") pod "e26b2a18-8ac8-4b86-b13e-513820f9671e" (UID: "e26b2a18-8ac8-4b86-b13e-513820f9671e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.369658 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8" (OuterVolumeSpecName: "kube-api-access-wvjv8") pod "e26b2a18-8ac8-4b86-b13e-513820f9671e" (UID: "e26b2a18-8ac8-4b86-b13e-513820f9671e"). InnerVolumeSpecName "kube-api-access-wvjv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.379000 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e26b2a18-8ac8-4b86-b13e-513820f9671e" (UID: "e26b2a18-8ac8-4b86-b13e-513820f9671e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.465617 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.465911 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvjv8\" (UniqueName: \"kubernetes.io/projected/e26b2a18-8ac8-4b86-b13e-513820f9671e-kube-api-access-wvjv8\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.465921 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e26b2a18-8ac8-4b86-b13e-513820f9671e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.793157 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerStarted","Data":"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f"} Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.796445 4724 generic.go:334] "Generic (PLEG): container finished" podID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerID="ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600" exitCode=0 Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.796529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerDied","Data":"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600"} Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.796573 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n79gh" event={"ID":"e26b2a18-8ac8-4b86-b13e-513820f9671e","Type":"ContainerDied","Data":"07fb2e7da19133815a40c23ab1ad4c8d3d1c9b9c166f52fb56ba50d52ca18ef8"} Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.796572 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n79gh" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.796595 4724 scope.go:117] "RemoveContainer" containerID="ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.829624 4724 scope.go:117] "RemoveContainer" containerID="614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.844613 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.857372 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n79gh"] Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.857920 4724 scope.go:117] "RemoveContainer" containerID="f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.909451 4724 scope.go:117] "RemoveContainer" containerID="ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600" Feb 23 18:50:15 crc kubenswrapper[4724]: E0223 18:50:15.910085 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600\": container with ID starting with ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600 not found: ID does not exist" containerID="ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.910121 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600"} err="failed to get container status \"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600\": rpc error: code = NotFound desc = could not find container \"ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600\": container with ID starting with ff736629b933ff48f1e45365c4f64bff8f47d2e043e6efac1a496b64490c0600 not found: ID does not exist" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.910142 4724 scope.go:117] "RemoveContainer" containerID="614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391" Feb 23 18:50:15 crc kubenswrapper[4724]: E0223 18:50:15.910550 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391\": container with ID starting with 614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391 not found: ID does not exist" containerID="614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.910589 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391"} err="failed to get container status \"614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391\": rpc error: code = NotFound desc = could not find container \"614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391\": container with ID starting with 614fa57afd6cd9cccb9bf0f9a606e2006029038e94040a6f751305fabf389391 not found: ID does not exist" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.910614 4724 scope.go:117] "RemoveContainer" 
containerID="f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b" Feb 23 18:50:15 crc kubenswrapper[4724]: E0223 18:50:15.911085 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b\": container with ID starting with f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b not found: ID does not exist" containerID="f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b" Feb 23 18:50:15 crc kubenswrapper[4724]: I0223 18:50:15.911168 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b"} err="failed to get container status \"f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b\": rpc error: code = NotFound desc = could not find container \"f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b\": container with ID starting with f2cb5a0d21d0ef20ba00cf5ad7a45de3efd671f91336131edf1452b21658ee6b not found: ID does not exist" Feb 23 18:50:16 crc kubenswrapper[4724]: I0223 18:50:16.962612 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" path="/var/lib/kubelet/pods/e26b2a18-8ac8-4b86-b13e-513820f9671e/volumes" Feb 23 18:50:17 crc kubenswrapper[4724]: I0223 18:50:17.819719 4724 generic.go:334] "Generic (PLEG): container finished" podID="9f15a35e-170a-4bc9-b921-113e2075e656" containerID="86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f" exitCode=0 Feb 23 18:50:17 crc kubenswrapper[4724]: I0223 18:50:17.819797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerDied","Data":"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f"} Feb 23 18:50:18 crc kubenswrapper[4724]: I0223 18:50:18.834128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerStarted","Data":"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83"} Feb 23 18:50:18 crc kubenswrapper[4724]: I0223 18:50:18.857993 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fnbkc" podStartSLOduration=3.430350856 podStartE2EDuration="6.857976405s" podCreationTimestamp="2026-02-23 18:50:12 +0000 UTC" firstStartedPulling="2026-02-23 18:50:14.781798878 +0000 UTC m=+4770.597998478" lastFinishedPulling="2026-02-23 18:50:18.209424427 +0000 UTC m=+4774.025624027" observedRunningTime="2026-02-23 18:50:18.856679183 +0000 UTC m=+4774.672878783" watchObservedRunningTime="2026-02-23 18:50:18.857976405 +0000 UTC m=+4774.674176005" Feb 23 18:50:19 crc kubenswrapper[4724]: I0223 18:50:19.844276 4724 generic.go:334] "Generic (PLEG): container finished" podID="9df81af7-c367-482b-a3bc-300150722639" containerID="f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead" exitCode=0 Feb 23 18:50:19 crc kubenswrapper[4724]: I0223 18:50:19.844323 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerDied","Data":"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead"} Feb 23 18:50:20 crc kubenswrapper[4724]: I0223 
18:50:20.859011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerStarted","Data":"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b"} Feb 23 18:50:20 crc kubenswrapper[4724]: I0223 18:50:20.879445 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n445s" podStartSLOduration=2.346217774 podStartE2EDuration="8.879428029s" podCreationTimestamp="2026-02-23 18:50:12 +0000 UTC" firstStartedPulling="2026-02-23 18:50:13.76727143 +0000 UTC m=+4769.583471030" lastFinishedPulling="2026-02-23 18:50:20.300481675 +0000 UTC m=+4776.116681285" observedRunningTime="2026-02-23 18:50:20.878510546 +0000 UTC m=+4776.694710146" watchObservedRunningTime="2026-02-23 18:50:20.879428029 +0000 UTC m=+4776.695627629" Feb 23 18:50:22 crc kubenswrapper[4724]: I0223 18:50:22.353457 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:22 crc kubenswrapper[4724]: I0223 18:50:22.353778 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:23 crc kubenswrapper[4724]: I0223 18:50:23.157417 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:23 crc kubenswrapper[4724]: I0223 18:50:23.157463 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:23 crc kubenswrapper[4724]: I0223 18:50:23.400660 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n445s" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:23 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:23 crc kubenswrapper[4724]: > Feb 23 18:50:24 crc kubenswrapper[4724]: I0223 18:50:24.703470 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fnbkc" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:24 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:24 crc kubenswrapper[4724]: > Feb 23 18:50:26 crc kubenswrapper[4724]: I0223 18:50:26.951537 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:50:26 crc kubenswrapper[4724]: E0223 18:50:26.952096 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:50:33 crc kubenswrapper[4724]: I0223 18:50:33.405913 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n445s" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:33 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:33 
crc kubenswrapper[4724]: > Feb 23 18:50:34 crc kubenswrapper[4724]: I0223 18:50:34.204111 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fnbkc" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:34 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:34 crc kubenswrapper[4724]: > Feb 23 18:50:37 crc kubenswrapper[4724]: I0223 18:50:37.951065 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:50:37 crc kubenswrapper[4724]: E0223 18:50:37.951567 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:50:43 crc kubenswrapper[4724]: I0223 18:50:43.225035 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:43 crc kubenswrapper[4724]: I0223 18:50:43.272274 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:43 crc kubenswrapper[4724]: I0223 18:50:43.397197 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n445s" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:43 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:43 crc kubenswrapper[4724]: > Feb 23 18:50:44 crc kubenswrapper[4724]: I0223 18:50:44.229307 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.107302 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fnbkc" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server" containerID="cri-o://84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83" gracePeriod=2 Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.625493 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.719326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content\") pod \"9f15a35e-170a-4bc9-b921-113e2075e656\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.719962 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4nq2\" (UniqueName: \"kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2\") pod \"9f15a35e-170a-4bc9-b921-113e2075e656\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.720144 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities\") pod \"9f15a35e-170a-4bc9-b921-113e2075e656\" (UID: \"9f15a35e-170a-4bc9-b921-113e2075e656\") " Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.720903 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities" (OuterVolumeSpecName: "utilities") pod "9f15a35e-170a-4bc9-b921-113e2075e656" (UID: "9f15a35e-170a-4bc9-b921-113e2075e656"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.721621 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.727578 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2" (OuterVolumeSpecName: "kube-api-access-f4nq2") pod "9f15a35e-170a-4bc9-b921-113e2075e656" (UID: "9f15a35e-170a-4bc9-b921-113e2075e656"). InnerVolumeSpecName "kube-api-access-f4nq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.773854 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f15a35e-170a-4bc9-b921-113e2075e656" (UID: "9f15a35e-170a-4bc9-b921-113e2075e656"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.824467 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f15a35e-170a-4bc9-b921-113e2075e656-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:45 crc kubenswrapper[4724]: I0223 18:50:45.825178 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4nq2\" (UniqueName: \"kubernetes.io/projected/9f15a35e-170a-4bc9-b921-113e2075e656-kube-api-access-f4nq2\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.117791 4724 generic.go:334] "Generic (PLEG): container finished" podID="9f15a35e-170a-4bc9-b921-113e2075e656" containerID="84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83" exitCode=0 Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.117865 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerDied","Data":"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83"} Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.117920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnbkc" event={"ID":"9f15a35e-170a-4bc9-b921-113e2075e656","Type":"ContainerDied","Data":"889edb58c18d8d8f98f21519accb7d3f7ef162781e08a7cda3045f6008520e72"} Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.117948 4724 scope.go:117] "RemoveContainer" containerID="84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.117869 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnbkc" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.147354 4724 scope.go:117] "RemoveContainer" containerID="86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.161053 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.170779 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fnbkc"] Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.180574 4724 scope.go:117] "RemoveContainer" containerID="5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.238830 4724 scope.go:117] "RemoveContainer" containerID="84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83" Feb 23 18:50:46 crc kubenswrapper[4724]: E0223 18:50:46.239507 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83\": container with ID starting with 84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83 not found: ID does not exist" containerID="84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.239543 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83"} err="failed to get container status \"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83\": rpc error: code = NotFound desc = could not find container \"84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83\": container with ID starting with 84b73c03d14470a28522ddd2fbc45173bfc493ab2654dee888a6bc97b2097f83 not found: ID does not exist" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.239563 4724 scope.go:117] "RemoveContainer" containerID="86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f" Feb 23 18:50:46 crc kubenswrapper[4724]: E0223 18:50:46.239987 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f\": container with ID starting with 86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f not found: ID does not exist" containerID="86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.240027 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f"} err="failed to get container status \"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f\": rpc error: code = NotFound desc = could not find container \"86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f\": container with ID starting with 86988d09c2296f351e3cb6b2f335d99344a35266ee73687d3cf4cb761275158f not found: ID does not exist" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.240058 4724 scope.go:117] "RemoveContainer" containerID="5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740" Feb 23 18:50:46 crc kubenswrapper[4724]: E0223 18:50:46.240305 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740\": container with ID starting with 5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740 not found: ID does not exist" containerID="5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.240335 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740"} err="failed to get container status \"5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740\": rpc error: code = NotFound desc = could not find container \"5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740\": container with ID starting with 5fa7a32ea7876d4d324a5dbe484867b1d9640a66a1b31e90cae6936050482740 not found: ID does not exist" Feb 23 18:50:46 crc kubenswrapper[4724]: I0223 18:50:46.964678 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" path="/var/lib/kubelet/pods/9f15a35e-170a-4bc9-b921-113e2075e656/volumes" Feb 23 18:50:51 crc kubenswrapper[4724]: I0223 18:50:51.615773 4724 scope.go:117] "RemoveContainer" containerID="b56b3cf88d04813ba09fa8d2f22a2dc68124e3c8e13afa68dfcd168868d28fac" Feb 23 18:50:51 crc kubenswrapper[4724]: I0223 18:50:51.643643 4724 scope.go:117] "RemoveContainer" containerID="1638784e65799a225bf3999a722f3d22a0d7b17b6c5b11a028b48a4ba6bd7eac" Feb 23 18:50:51 crc kubenswrapper[4724]: I0223 18:50:51.694379 4724 scope.go:117] "RemoveContainer" containerID="a7bcbaa2060123e09f91d79dbd0db6d3b60b02e0110d67e0e540117224c5ddb4" Feb 23 18:50:52 crc kubenswrapper[4724]: I0223 18:50:52.611222 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:52 crc kubenswrapper[4724]: I0223 18:50:52.669457 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:52 crc kubenswrapper[4724]: I0223 18:50:52.848198 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:52 crc kubenswrapper[4724]: I0223 18:50:52.951205 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:50:52 crc kubenswrapper[4724]: E0223 18:50:52.951464 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.202284 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n445s" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server" containerID="cri-o://6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b" gracePeriod=2 Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.679737 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.833372 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content\") pod \"9df81af7-c367-482b-a3bc-300150722639\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.833449 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities\") pod \"9df81af7-c367-482b-a3bc-300150722639\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.833763 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpbk6\" (UniqueName: \"kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6\") pod \"9df81af7-c367-482b-a3bc-300150722639\" (UID: \"9df81af7-c367-482b-a3bc-300150722639\") " Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.834692 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities" (OuterVolumeSpecName: "utilities") pod "9df81af7-c367-482b-a3bc-300150722639" (UID: "9df81af7-c367-482b-a3bc-300150722639"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.840550 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6" (OuterVolumeSpecName: "kube-api-access-qpbk6") pod "9df81af7-c367-482b-a3bc-300150722639" (UID: "9df81af7-c367-482b-a3bc-300150722639"). InnerVolumeSpecName "kube-api-access-qpbk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.936625 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpbk6\" (UniqueName: \"kubernetes.io/projected/9df81af7-c367-482b-a3bc-300150722639-kube-api-access-qpbk6\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.936685 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:54 crc kubenswrapper[4724]: I0223 18:50:54.975737 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9df81af7-c367-482b-a3bc-300150722639" (UID: "9df81af7-c367-482b-a3bc-300150722639"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.038543 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df81af7-c367-482b-a3bc-300150722639-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.217720 4724 generic.go:334] "Generic (PLEG): container finished" podID="9df81af7-c367-482b-a3bc-300150722639" containerID="6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b" exitCode=0 Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.217772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerDied","Data":"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b"} Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.217782 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n445s" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.217813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n445s" event={"ID":"9df81af7-c367-482b-a3bc-300150722639","Type":"ContainerDied","Data":"00ff460617a766b5b3cd49fa77cf088367a9fbaa1cd221f132d7500a5cb1454b"} Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.217836 4724 scope.go:117] "RemoveContainer" containerID="6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.246308 4724 scope.go:117] "RemoveContainer" containerID="f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.268931 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.283602 4724 scope.go:117] "RemoveContainer" containerID="c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.285922 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n445s"] Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.344134 4724 scope.go:117] "RemoveContainer" containerID="6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b" Feb 23 18:50:55 crc kubenswrapper[4724]: E0223 18:50:55.344660 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b\": container with ID starting with 6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b not found: ID does not exist" containerID="6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.344694 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b"} err="failed to get container status \"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b\": rpc error: code = NotFound desc = could not find container \"6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b\": container with ID starting with 6ed26c786717d6b68494906cc4293f074daef841abd253f99d341c61973fdd5b not found: ID does not exist" Feb 23 18:50:55 crc 
kubenswrapper[4724]: I0223 18:50:55.344714 4724 scope.go:117] "RemoveContainer" containerID="f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead" Feb 23 18:50:55 crc kubenswrapper[4724]: E0223 18:50:55.345070 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead\": container with ID starting with f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead not found: ID does not exist" containerID="f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.345100 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead"} err="failed to get container status \"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead\": rpc error: code = NotFound desc = could not find container \"f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead\": container with ID starting with f2f8bfbe476b0e45363982b7953bcf87f872b20589ae45802a1fb024d224eead not found: ID does not exist" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.345119 4724 scope.go:117] "RemoveContainer" containerID="c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9" Feb 23 18:50:55 crc kubenswrapper[4724]: E0223 18:50:55.345370 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9\": container with ID starting with c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9 not found: ID does not exist" containerID="c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9" Feb 23 18:50:55 crc kubenswrapper[4724]: I0223 18:50:55.345413 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9"} err="failed to get container status \"c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9\": rpc error: code = NotFound desc = could not find container \"c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9\": container with ID starting with c1c487cd0365da80b6b8c67ecb3ddc09f9deba9de5d125d1101b6ab6b1a179c9 not found: ID does not exist" Feb 23 18:50:56 crc kubenswrapper[4724]: I0223 18:50:56.968782 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df81af7-c367-482b-a3bc-300150722639" path="/var/lib/kubelet/pods/9df81af7-c367-482b-a3bc-300150722639/volumes" Feb 23 18:51:07 crc kubenswrapper[4724]: I0223 18:51:07.951124 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:51:07 crc kubenswrapper[4724]: E0223 18:51:07.951793 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 18:51:20 crc kubenswrapper[4724]: I0223 18:51:20.951610 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" 
Feb 23 18:51:20 crc kubenswrapper[4724]: E0223 18:51:20.952311 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:51:32 crc kubenswrapper[4724]: I0223 18:51:32.951572 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:51:32 crc kubenswrapper[4724]: E0223 18:51:32.953094 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:51:47 crc kubenswrapper[4724]: I0223 18:51:47.950913 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:51:47 crc kubenswrapper[4724]: E0223 18:51:47.952046 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:52:01 crc kubenswrapper[4724]: I0223 18:52:01.951760 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:52:01 crc kubenswrapper[4724]: E0223 18:52:01.952883 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:52:12 crc kubenswrapper[4724]: I0223 18:52:12.951069 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:52:12 crc kubenswrapper[4724]: E0223 18:52:12.951823 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:52:27 crc kubenswrapper[4724]: I0223 18:52:27.950852 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:52:27 crc kubenswrapper[4724]: E0223 18:52:27.951959 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:52:40 crc kubenswrapper[4724]: I0223 18:52:40.951223 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:52:40 crc kubenswrapper[4724]: E0223 18:52:40.952055 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:52:53 crc kubenswrapper[4724]: I0223 18:52:53.951927 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:52:53 crc kubenswrapper[4724]: E0223 18:52:53.953092 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:53:06 crc kubenswrapper[4724]: I0223 18:53:06.952054 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99"
Feb 23 18:53:07 crc kubenswrapper[4724]: I0223 18:53:07.471625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce"}
Feb 23 18:54:58 crc kubenswrapper[4724]: I0223 18:54:58.518660 4724 generic.go:334] "Generic (PLEG): container finished" podID="0d826425-e3f8-42d4-823f-2f8db766ad9a" containerID="4d449e7d2897358f9b45f5be03e332abda0f08a7f8b60e2956d496e45370ed34" exitCode=0
Feb 23 18:54:58 crc kubenswrapper[4724]: I0223 18:54:58.518753 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0d826425-e3f8-42d4-823f-2f8db766ad9a","Type":"ContainerDied","Data":"4d449e7d2897358f9b45f5be03e332abda0f08a7f8b60e2956d496e45370ed34"}
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.923126 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.965801 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jktn6\" (UniqueName: \"kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.965862 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.965900 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966057 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966105 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966192 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966218 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966413 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret\") pod \"0d826425-e3f8-42d4-823f-2f8db766ad9a\" (UID: \"0d826425-e3f8-42d4-823f-2f8db766ad9a\") "
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966796 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.966813 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data" (OuterVolumeSpecName: "config-data") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.967199 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.967231 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.973579 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.976637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6" (OuterVolumeSpecName: "kube-api-access-jktn6") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "kube-api-access-jktn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:59 crc kubenswrapper[4724]: I0223 18:54:59.985052 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.016222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.019183 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.035339 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.056185 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0d826425-e3f8-42d4-823f-2f8db766ad9a" (UID: "0d826425-e3f8-42d4-823f-2f8db766ad9a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068914 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068946 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jktn6\" (UniqueName: \"kubernetes.io/projected/0d826425-e3f8-42d4-823f-2f8db766ad9a-kube-api-access-jktn6\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068955 4724 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068964 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0d826425-e3f8-42d4-823f-2f8db766ad9a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068972 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d826425-e3f8-42d4-823f-2f8db766ad9a-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.068983 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0d826425-e3f8-42d4-823f-2f8db766ad9a-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.069003 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.104351 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.171268 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.538061 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0d826425-e3f8-42d4-823f-2f8db766ad9a","Type":"ContainerDied","Data":"e3b8deeb44fb45fa04a78c6d874267283c0cfb2f9a290bfe341f9b8fc5ebc476"}
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.538100 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b8deeb44fb45fa04a78c6d874267283c0cfb2f9a290bfe341f9b8fc5ebc476"
Feb 23 18:55:00 crc kubenswrapper[4724]: I0223 18:55:00.538156 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.420875 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421805 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421818 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421827 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421832 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421844 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421850 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421862 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421867 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421887 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421893 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421907 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d826425-e3f8-42d4-823f-2f8db766ad9a" containerName="tempest-tests-tempest-tests-runner"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421913 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d826425-e3f8-42d4-823f-2f8db766ad9a" containerName="tempest-tests-tempest-tests-runner"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421921 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421926 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="extract-content"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421937 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421943 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421952 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421958 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: E0223 18:55:03.421973 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.421980 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="extract-utilities"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.422167 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df81af7-c367-482b-a3bc-300150722639" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.422189 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d826425-e3f8-42d4-823f-2f8db766ad9a" containerName="tempest-tests-tempest-tests-runner"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.422206 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26b2a18-8ac8-4b86-b13e-513820f9671e" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.422221 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f15a35e-170a-4bc9-b921-113e2075e656" containerName="registry-server"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.423053 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.426315 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7tgsq"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.434831 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.542017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.542105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqbf\" (UniqueName: \"kubernetes.io/projected/978d2d70-05f0-4404-8ace-2ba6f872d25a-kube-api-access-xmqbf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.645591 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.645738 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmqbf\" (UniqueName: \"kubernetes.io/projected/978d2d70-05f0-4404-8ace-2ba6f872d25a-kube-api-access-xmqbf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.646225 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.669932 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmqbf\" (UniqueName: \"kubernetes.io/projected/978d2d70-05f0-4404-8ace-2ba6f872d25a-kube-api-access-xmqbf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc kubenswrapper[4724]: I0223 18:55:03.685417 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"978d2d70-05f0-4404-8ace-2ba6f872d25a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 18:55:03 crc
kubenswrapper[4724]: I0223 18:55:03.745976 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 18:55:04 crc kubenswrapper[4724]: I0223 18:55:04.191605 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 18:55:04 crc kubenswrapper[4724]: W0223 18:55:04.199350 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod978d2d70_05f0_4404_8ace_2ba6f872d25a.slice/crio-253e45234127b0e17e1e4514e03e540b4993136473212caf799c7a8cae3563d2 WatchSource:0}: Error finding container 253e45234127b0e17e1e4514e03e540b4993136473212caf799c7a8cae3563d2: Status 404 returned error can't find the container with id 253e45234127b0e17e1e4514e03e540b4993136473212caf799c7a8cae3563d2 Feb 23 18:55:04 crc kubenswrapper[4724]: I0223 18:55:04.203005 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:55:04 crc kubenswrapper[4724]: I0223 18:55:04.579938 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"978d2d70-05f0-4404-8ace-2ba6f872d25a","Type":"ContainerStarted","Data":"253e45234127b0e17e1e4514e03e540b4993136473212caf799c7a8cae3563d2"} Feb 23 18:55:05 crc kubenswrapper[4724]: I0223 18:55:05.595864 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"978d2d70-05f0-4404-8ace-2ba6f872d25a","Type":"ContainerStarted","Data":"dbabb2251d349a5edef38f4d64ce67bebc5c46647331c0c951d1dcba241f6e41"} Feb 23 18:55:05 crc kubenswrapper[4724]: I0223 18:55:05.614225 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6957319800000001 podStartE2EDuration="2.614204842s" podCreationTimestamp="2026-02-23 18:55:03 +0000 UTC" firstStartedPulling="2026-02-23 18:55:04.202804877 +0000 UTC m=+5060.019004477" lastFinishedPulling="2026-02-23 18:55:05.121277729 +0000 UTC m=+5060.937477339" observedRunningTime="2026-02-23 18:55:05.610595442 +0000 UTC m=+5061.426795062" watchObservedRunningTime="2026-02-23 18:55:05.614204842 +0000 UTC m=+5061.430404452" Feb 23 18:55:27 crc kubenswrapper[4724]: I0223 18:55:27.752136 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:55:27 crc kubenswrapper[4724]: I0223 18:55:27.752919 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.190947 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78zrq/must-gather-t654n"] Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.193190 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.194733 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-78zrq"/"default-dockercfg-h87mh" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.195328 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78zrq"/"kube-root-ca.crt" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.196281 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78zrq"/"openshift-service-ca.crt" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.203283 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78zrq/must-gather-t654n"] Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.381546 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrq9\" (UniqueName: \"kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.381682 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.483831 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrq9\" (UniqueName: \"kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.483907 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.485272 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.506231 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrq9\" (UniqueName: \"kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9\") pod \"must-gather-t654n\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:29 crc kubenswrapper[4724]: I0223 18:55:29.513961 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 18:55:30 crc kubenswrapper[4724]: I0223 18:55:30.008706 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78zrq/must-gather-t654n"] Feb 23 18:55:30 crc kubenswrapper[4724]: W0223 18:55:30.015837 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e70813e_32c6_4649_9ae4_5291ceed814e.slice/crio-0637af9c94f50f8b83851a65933d9a4c467401323e46d9ca614cd0537771a526 WatchSource:0}: Error finding container 0637af9c94f50f8b83851a65933d9a4c467401323e46d9ca614cd0537771a526: Status 404 returned error can't find the container with id 0637af9c94f50f8b83851a65933d9a4c467401323e46d9ca614cd0537771a526 Feb 23 18:55:30 crc kubenswrapper[4724]: I0223 18:55:30.856479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/must-gather-t654n" event={"ID":"7e70813e-32c6-4649-9ae4-5291ceed814e","Type":"ContainerStarted","Data":"0637af9c94f50f8b83851a65933d9a4c467401323e46d9ca614cd0537771a526"} Feb 23 18:55:36 crc kubenswrapper[4724]: I0223 18:55:36.927609 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/must-gather-t654n" event={"ID":"7e70813e-32c6-4649-9ae4-5291ceed814e","Type":"ContainerStarted","Data":"84ee3e3f59657325bcd0ef11a242405818912936714c67b72a09d175c28ef5c2"} Feb 23 18:55:36 crc kubenswrapper[4724]: I0223 18:55:36.928146 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/must-gather-t654n" event={"ID":"7e70813e-32c6-4649-9ae4-5291ceed814e","Type":"ContainerStarted","Data":"6f1c1d463f48ba4aaa2d48a4c9ac140d46992a31fde19caab9e3fac866280192"} Feb 23 18:55:36 crc kubenswrapper[4724]: I0223 18:55:36.948007 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-78zrq/must-gather-t654n" podStartSLOduration=2.267743887 podStartE2EDuration="7.94798412s" podCreationTimestamp="2026-02-23 18:55:29 +0000 UTC" firstStartedPulling="2026-02-23 18:55:30.022258226 +0000 UTC m=+5085.838457826" lastFinishedPulling="2026-02-23 18:55:35.702498459 +0000 UTC m=+5091.518698059" observedRunningTime="2026-02-23 18:55:36.941095588 +0000 UTC m=+5092.757295188" watchObservedRunningTime="2026-02-23 18:55:36.94798412 +0000 UTC m=+5092.764183720" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.713975 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78zrq/crc-debug-b9266"] Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.715509 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.801718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.801971 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf66g\" (UniqueName: \"kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.904372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf66g\" (UniqueName: \"kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.904488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.904636 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:39 crc kubenswrapper[4724]: I0223 18:55:39.932728 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf66g\" (UniqueName: \"kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g\") pod \"crc-debug-b9266\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:40 crc kubenswrapper[4724]: I0223 18:55:40.031475 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:55:40 crc kubenswrapper[4724]: I0223 18:55:40.970373 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-b9266" event={"ID":"06ba436c-157f-4766-a481-c1d6f3c21c5f","Type":"ContainerStarted","Data":"97c3fc46f8feee71bb96960c54732da585d5c4b439374371b556ec68b86869b2"} Feb 23 18:55:50 crc kubenswrapper[4724]: I0223 18:55:50.095353 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-b9266" event={"ID":"06ba436c-157f-4766-a481-c1d6f3c21c5f","Type":"ContainerStarted","Data":"284f90198dc471a183e6f1329505e4bc608b01bc4cab6c86c4116d7c37ab9140"} Feb 23 18:55:50 crc kubenswrapper[4724]: I0223 18:55:50.112130 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-78zrq/crc-debug-b9266" podStartSLOduration=1.583874209 podStartE2EDuration="11.112102219s" podCreationTimestamp="2026-02-23 18:55:39 +0000 UTC" firstStartedPulling="2026-02-23 18:55:40.068219309 +0000 UTC m=+5095.884418909" lastFinishedPulling="2026-02-23 18:55:49.596447319 +0000 UTC m=+5105.412646919" observedRunningTime="2026-02-23 18:55:50.10974382 +0000 UTC m=+5105.925943420" watchObservedRunningTime="2026-02-23 18:55:50.112102219 +0000 UTC m=+5105.928301829" Feb 23 18:55:57 crc kubenswrapper[4724]: I0223 18:55:57.752291 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:55:57 crc kubenswrapper[4724]: I0223 18:55:57.752852 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:56:27 crc kubenswrapper[4724]: I0223 18:56:27.754925 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:56:27 crc kubenswrapper[4724]: I0223 18:56:27.755582 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:56:27 crc kubenswrapper[4724]: I0223 18:56:27.755636 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 18:56:27 crc kubenswrapper[4724]: I0223 18:56:27.756520 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:56:27 crc kubenswrapper[4724]: I0223 18:56:27.756587 4724 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce" gracePeriod=600 Feb 23 18:56:28 crc kubenswrapper[4724]: I0223 18:56:28.436311 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce" exitCode=0 Feb 23 18:56:28 crc kubenswrapper[4724]: I0223 18:56:28.436376 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce"} Feb 23 18:56:28 crc kubenswrapper[4724]: I0223 18:56:28.436683 4724 scope.go:117] "RemoveContainer" containerID="bf1b52f0bf4f6d6849a1e01af7d9a0d575e3465a63d51b8f1787d5cd801f7d99" Feb 23 18:56:29 crc kubenswrapper[4724]: I0223 18:56:29.446908 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"} Feb 23 18:56:32 crc kubenswrapper[4724]: I0223 18:56:32.472445 4724 generic.go:334] "Generic (PLEG): container finished" podID="06ba436c-157f-4766-a481-c1d6f3c21c5f" containerID="284f90198dc471a183e6f1329505e4bc608b01bc4cab6c86c4116d7c37ab9140" exitCode=0 Feb 23 18:56:32 crc kubenswrapper[4724]: I0223 18:56:32.472530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-b9266" event={"ID":"06ba436c-157f-4766-a481-c1d6f3c21c5f","Type":"ContainerDied","Data":"284f90198dc471a183e6f1329505e4bc608b01bc4cab6c86c4116d7c37ab9140"} Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.575662 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.613903 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-b9266"] Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.634708 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-b9266"] Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.738791 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf66g\" (UniqueName: \"kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g\") pod \"06ba436c-157f-4766-a481-c1d6f3c21c5f\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.739205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host\") pod \"06ba436c-157f-4766-a481-c1d6f3c21c5f\" (UID: \"06ba436c-157f-4766-a481-c1d6f3c21c5f\") " Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.739310 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host" (OuterVolumeSpecName: "host") pod "06ba436c-157f-4766-a481-c1d6f3c21c5f" (UID: "06ba436c-157f-4766-a481-c1d6f3c21c5f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.740178 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06ba436c-157f-4766-a481-c1d6f3c21c5f-host\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.746264 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g" (OuterVolumeSpecName: "kube-api-access-bf66g") pod "06ba436c-157f-4766-a481-c1d6f3c21c5f" (UID: "06ba436c-157f-4766-a481-c1d6f3c21c5f"). InnerVolumeSpecName "kube-api-access-bf66g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:56:33 crc kubenswrapper[4724]: I0223 18:56:33.842444 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf66g\" (UniqueName: \"kubernetes.io/projected/06ba436c-157f-4766-a481-c1d6f3c21c5f-kube-api-access-bf66g\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.490757 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c3fc46f8feee71bb96960c54732da585d5c4b439374371b556ec68b86869b2" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.490906 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-b9266" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.786531 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78zrq/crc-debug-bn6g4"] Feb 23 18:56:34 crc kubenswrapper[4724]: E0223 18:56:34.787066 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ba436c-157f-4766-a481-c1d6f3c21c5f" containerName="container-00" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.787080 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ba436c-157f-4766-a481-c1d6f3c21c5f" containerName="container-00" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.787319 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ba436c-157f-4766-a481-c1d6f3c21c5f" containerName="container-00" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.788013 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.961120 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ba436c-157f-4766-a481-c1d6f3c21c5f" path="/var/lib/kubelet/pods/06ba436c-157f-4766-a481-c1d6f3c21c5f/volumes" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.962624 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2j6\" (UniqueName: \"kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:34 crc kubenswrapper[4724]: I0223 18:56:34.962754 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.065047 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2j6\" (UniqueName: \"kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.065139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.065769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.096007 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2j6\" (UniqueName: \"kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6\") pod \"crc-debug-bn6g4\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " 
pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.105288 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.500938 4724 generic.go:334] "Generic (PLEG): container finished" podID="11433c2f-2a30-4c7d-a20c-d34c7a8428ef" containerID="785952424dc738efaafd536fb00f8a3a2054c56ee243ff603778dd75b29d68ad" exitCode=0 Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.501025 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" event={"ID":"11433c2f-2a30-4c7d-a20c-d34c7a8428ef","Type":"ContainerDied","Data":"785952424dc738efaafd536fb00f8a3a2054c56ee243ff603778dd75b29d68ad"} Feb 23 18:56:35 crc kubenswrapper[4724]: I0223 18:56:35.501597 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" event={"ID":"11433c2f-2a30-4c7d-a20c-d34c7a8428ef","Type":"ContainerStarted","Data":"ba09df3f19b7f0fee1e187c47f6bc2cd1acce0b211780d5b52437af682c68500"} Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.613532 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.802462 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2j6\" (UniqueName: \"kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6\") pod \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.802637 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host\") pod \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\" (UID: \"11433c2f-2a30-4c7d-a20c-d34c7a8428ef\") " Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.802709 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host" (OuterVolumeSpecName: "host") pod "11433c2f-2a30-4c7d-a20c-d34c7a8428ef" (UID: "11433c2f-2a30-4c7d-a20c-d34c7a8428ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.804874 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-host\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.808604 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6" (OuterVolumeSpecName: "kube-api-access-bs2j6") pod "11433c2f-2a30-4c7d-a20c-d34c7a8428ef" (UID: "11433c2f-2a30-4c7d-a20c-d34c7a8428ef"). InnerVolumeSpecName "kube-api-access-bs2j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:56:36 crc kubenswrapper[4724]: I0223 18:56:36.906684 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2j6\" (UniqueName: \"kubernetes.io/projected/11433c2f-2a30-4c7d-a20c-d34c7a8428ef-kube-api-access-bs2j6\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:37 crc kubenswrapper[4724]: I0223 18:56:37.521034 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" event={"ID":"11433c2f-2a30-4c7d-a20c-d34c7a8428ef","Type":"ContainerDied","Data":"ba09df3f19b7f0fee1e187c47f6bc2cd1acce0b211780d5b52437af682c68500"} Feb 23 18:56:37 crc kubenswrapper[4724]: I0223 18:56:37.521419 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba09df3f19b7f0fee1e187c47f6bc2cd1acce0b211780d5b52437af682c68500" Feb 23 18:56:37 crc kubenswrapper[4724]: I0223 18:56:37.521124 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-bn6g4" Feb 23 18:56:38 crc kubenswrapper[4724]: I0223 18:56:38.064527 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-bn6g4"] Feb 23 18:56:38 crc kubenswrapper[4724]: I0223 18:56:38.078360 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-bn6g4"] Feb 23 18:56:38 crc kubenswrapper[4724]: I0223 18:56:38.964956 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11433c2f-2a30-4c7d-a20c-d34c7a8428ef" path="/var/lib/kubelet/pods/11433c2f-2a30-4c7d-a20c-d34c7a8428ef/volumes" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.287235 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78zrq/crc-debug-qzsjz"] Feb 23 18:56:39 crc kubenswrapper[4724]: E0223 18:56:39.287903 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11433c2f-2a30-4c7d-a20c-d34c7a8428ef" containerName="container-00" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.287927 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="11433c2f-2a30-4c7d-a20c-d34c7a8428ef" containerName="container-00" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.288263 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="11433c2f-2a30-4c7d-a20c-d34c7a8428ef" containerName="container-00" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.289503 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.359273 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.359733 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwcp\" (UniqueName: \"kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.462013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwcp\" (UniqueName: \"kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.462195 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.462355 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.500769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwcp\" (UniqueName: \"kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp\") pod \"crc-debug-qzsjz\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: I0223 18:56:39.610694 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:39 crc kubenswrapper[4724]: W0223 18:56:39.648180 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5f9aae3_023f_4bbc_9577_3a8af9027744.slice/crio-14c88b22737343f226afdea7410e27fb6ef134fc9188ccbd3599cabdd7590d48 WatchSource:0}: Error finding container 14c88b22737343f226afdea7410e27fb6ef134fc9188ccbd3599cabdd7590d48: Status 404 returned error can't find the container with id 14c88b22737343f226afdea7410e27fb6ef134fc9188ccbd3599cabdd7590d48 Feb 23 18:56:40 crc kubenswrapper[4724]: I0223 18:56:40.550748 4724 generic.go:334] "Generic (PLEG): container finished" podID="c5f9aae3-023f-4bbc-9577-3a8af9027744" containerID="231c2064f216041e1c1b28cfc18391892501d4cf2a140b90e7e7ded5badc0244" exitCode=0 Feb 23 18:56:40 crc kubenswrapper[4724]: I0223 18:56:40.550914 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" event={"ID":"c5f9aae3-023f-4bbc-9577-3a8af9027744","Type":"ContainerDied","Data":"231c2064f216041e1c1b28cfc18391892501d4cf2a140b90e7e7ded5badc0244"} Feb 23 18:56:40 crc kubenswrapper[4724]: I0223 18:56:40.551036 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" event={"ID":"c5f9aae3-023f-4bbc-9577-3a8af9027744","Type":"ContainerStarted","Data":"14c88b22737343f226afdea7410e27fb6ef134fc9188ccbd3599cabdd7590d48"} Feb 23 18:56:40 crc kubenswrapper[4724]: I0223 18:56:40.609620 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-qzsjz"] Feb 23 18:56:40 crc kubenswrapper[4724]: I0223 18:56:40.618736 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78zrq/crc-debug-qzsjz"] Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.665542 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.706701 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwcp\" (UniqueName: \"kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp\") pod \"c5f9aae3-023f-4bbc-9577-3a8af9027744\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.706794 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host\") pod \"c5f9aae3-023f-4bbc-9577-3a8af9027744\" (UID: \"c5f9aae3-023f-4bbc-9577-3a8af9027744\") " Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.706945 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host" (OuterVolumeSpecName: "host") pod "c5f9aae3-023f-4bbc-9577-3a8af9027744" (UID: "c5f9aae3-023f-4bbc-9577-3a8af9027744"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.707419 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5f9aae3-023f-4bbc-9577-3a8af9027744-host\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.712679 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp" (OuterVolumeSpecName: "kube-api-access-4pwcp") pod "c5f9aae3-023f-4bbc-9577-3a8af9027744" (UID: "c5f9aae3-023f-4bbc-9577-3a8af9027744"). InnerVolumeSpecName "kube-api-access-4pwcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:56:41 crc kubenswrapper[4724]: I0223 18:56:41.809112 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwcp\" (UniqueName: \"kubernetes.io/projected/c5f9aae3-023f-4bbc-9577-3a8af9027744-kube-api-access-4pwcp\") on node \"crc\" DevicePath \"\"" Feb 23 18:56:42 crc kubenswrapper[4724]: I0223 18:56:42.569761 4724 scope.go:117] "RemoveContainer" containerID="231c2064f216041e1c1b28cfc18391892501d4cf2a140b90e7e7ded5badc0244" Feb 23 18:56:42 crc kubenswrapper[4724]: I0223 18:56:42.569818 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78zrq/crc-debug-qzsjz" Feb 23 18:56:42 crc kubenswrapper[4724]: I0223 18:56:42.964690 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f9aae3-023f-4bbc-9577-3a8af9027744" path="/var/lib/kubelet/pods/c5f9aae3-023f-4bbc-9577-3a8af9027744/volumes" Feb 23 18:57:13 crc kubenswrapper[4724]: I0223 18:57:13.696557 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f4c5b5ccd-7xcmx_e93c91f5-d9d7-4322-97c0-8d2b9ab82714/barbican-api/0.log" Feb 23 18:57:13 crc kubenswrapper[4724]: I0223 18:57:13.841617 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f4c5b5ccd-7xcmx_e93c91f5-d9d7-4322-97c0-8d2b9ab82714/barbican-api-log/0.log" Feb 23 18:57:13 crc kubenswrapper[4724]: I0223 18:57:13.968122 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cbfcdd8bd-6sfgm_c06fc526-bdf8-419c-8261-29fca2da229c/barbican-keystone-listener/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.024834 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cbfcdd8bd-6sfgm_c06fc526-bdf8-419c-8261-29fca2da229c/barbican-keystone-listener-log/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.078215 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68f84cbc4f-9ns6x_d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1/barbican-worker/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.152932 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68f84cbc4f-9ns6x_d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1/barbican-worker-log/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.260143 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf_456d50d3-b5f9-4dd4-9eec-c15f21b183e7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.407887 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/ceilometer-central-agent/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.486065 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/proxy-httpd/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.491502 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/ceilometer-notification-agent/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.571013 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/sg-core/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.661489 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c03aee9-806f-4319-a3b8-b3226a740f4b/cinder-api-log/0.log" Feb 23 18:57:14 crc kubenswrapper[4724]: I0223 18:57:14.930497 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_55cae485-5e0f-4fb8-a19a-21f84b246733/probe/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.073565 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_55cae485-5e0f-4fb8-a19a-21f84b246733/cinder-backup/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.166494 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2/cinder-scheduler/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.169028 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c03aee9-806f-4319-a3b8-b3226a740f4b/cinder-api/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.333545 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2/probe/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.375408 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fee47e38-5239-488d-a11c-53342802f8b1/probe/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.515076 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fee47e38-5239-488d-a11c-53342802f8b1/cinder-volume/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.641877 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_34ef4ee9-8229-4235-bb3c-f5138b1f8d4f/probe/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.785537 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9_a5ffe362-1a42-40ec-8cbf-ce9b83db854d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.796628 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_34ef4ee9-8229-4235-bb3c-f5138b1f8d4f/cinder-volume/0.log" Feb 23 18:57:15 crc kubenswrapper[4724]: I0223 18:57:15.976328 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-g4758_78a23e2d-61b1-4393-95b0-e4872270628a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:16 crc kubenswrapper[4724]: I0223 18:57:16.001463 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/init/0.log" Feb 23 18:57:16 crc kubenswrapper[4724]: I0223 18:57:16.878210 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/init/0.log" Feb 23 18:57:16 crc kubenswrapper[4724]: I0223 18:57:16.882793 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-255hh_15bf49cb-7015-49e6-9710-4f701dc9d6f7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.049278 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/dnsmasq-dns/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.139234 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8883a549-3562-42b7-86d4-934c3076f934/glance-httpd/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.177489 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8883a549-3562-42b7-86d4-934c3076f934/glance-log/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.338069 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_260cff26-a398-4898-9708-61ef33a6aa00/glance-httpd/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.366639 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_260cff26-a398-4898-9708-61ef33a6aa00/glance-log/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.590684 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fsscq_0e96ae5a-4689-4373-bfad-06a0f99345d2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.623587 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b4b6c94fb-ttctl_07785399-35e6-432b-8835-4412fa3ff02b/horizon/0.log" Feb 23 18:57:17 crc kubenswrapper[4724]: I0223 18:57:17.808115 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ktllk_ccfb9295-92e0-4f3d-a25c-a3a7f433126e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.011550 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531161-mxchr_3b373b9a-1005-41fb-92c8-22d259d8f036/keystone-cron/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.236608 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b4b6c94fb-ttctl_07785399-35e6-432b-8835-4412fa3ff02b/horizon-log/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.387554 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cb5799495-xxmx4_36583b8f-b74d-4f25-980e-030c8d3896c7/keystone-api/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.544254 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_85b4f79b-e696-483e-8ee7-8653f8c07a40/kube-state-metrics/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.559893 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm_3f5fa243-d790-4006-9c4c-7a1bf93a56b4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:18 crc kubenswrapper[4724]: I0223 18:57:18.977121 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84d9ddfbc9-spsrv_59037714-7bc4-4c52-95d7-a791923f67fe/neutron-httpd/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.069101 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw_ffe67500-5244-403d-8a50-59aa76582492/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.112563 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84d9ddfbc9-spsrv_59037714-7bc4-4c52-95d7-a791923f67fe/neutron-api/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.217925 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/setup-container/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.378261 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/setup-container/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.492833 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/rabbitmq/0.log" Feb 23 18:57:19 crc kubenswrapper[4724]: I0223 18:57:19.959869 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_557c3e1b-ccc8-48d7-8a2c-78de846beac2/nova-cell0-conductor-conductor/0.log" Feb 23 18:57:20 crc kubenswrapper[4724]: I0223 18:57:20.311137 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e54fa012-7969-4917-888f-a2f822eb9449/nova-cell1-conductor-conductor/0.log" Feb 23 18:57:20 crc kubenswrapper[4724]: I0223 18:57:20.553282 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_31f78c36-4f54-425d-87a6-3b0c7093a06c/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 18:57:20 crc kubenswrapper[4724]: I0223 18:57:20.789244 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4bef9e90-cdd6-4eb6-8801-3f7b07bc9363/nova-api-log/0.log" Feb 23 18:57:20 crc kubenswrapper[4724]: I0223 18:57:20.820031 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t898c_28de6808-9434-463a-9b7f-cd4236c51c29/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.047840 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4bef9e90-cdd6-4eb6-8801-3f7b07bc9363/nova-api-api/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.140691 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9365a64c-1314-4df5-b7b2-ed56c6d7a358/nova-metadata-log/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.413660 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/mysql-bootstrap/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.600067 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/mysql-bootstrap/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.608667 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_05bf40b5-2154-40d2-8714-7e7d24d42786/nova-scheduler-scheduler/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.644641 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/galera/0.log" Feb 23 18:57:21 crc kubenswrapper[4724]: I0223 18:57:21.804128 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/mysql-bootstrap/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.080674 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/mysql-bootstrap/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.099049 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/galera/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.278212 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f5d061d8-a5d8-48fd-8f20-45eb9def3384/openstackclient/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.320709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hh76w_8fd48d48-59c7-4470-9223-c3b3f786c8d9/ovn-controller/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.518735 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8b9ks_0371ce0f-1e0f-4b9f-a5aa-971ae7d19279/openstack-network-exporter/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.727779 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server-init/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.851783 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9365a64c-1314-4df5-b7b2-ed56c6d7a358/nova-metadata-metadata/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.897747 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server-init/0.log" Feb 23 18:57:22 crc kubenswrapper[4724]: I0223 18:57:22.943944 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.177492 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wn74p_5e7e7627-560c-4959-8d79-7999e31db5be/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.234447 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovs-vswitchd/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.343019 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_46836cc7-f4d3-432c-aa3e-c448d50a212e/openstack-network-exporter/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.424414 4724 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_46836cc7-f4d3-432c-aa3e-c448d50a212e/ovn-northd/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.433282 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ba834afe-088c-4b0c-97f5-7986f8f9c988/openstack-network-exporter/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.541559 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ba834afe-088c-4b0c-97f5-7986f8f9c988/ovsdbserver-nb/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.656036 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02d0b5c7-a3f7-47d6-a52f-cff5a0946cea/ovsdbserver-sb/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.674527 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02d0b5c7-a3f7-47d6-a52f-cff5a0946cea/openstack-network-exporter/0.log" Feb 23 18:57:23 crc kubenswrapper[4724]: I0223 18:57:23.987048 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-69f7cbf768-jd6kh_1b2a00ce-727b-4065-b3b4-99f43d28b54d/placement-api/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.030675 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/init-config-reloader/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.084117 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-69f7cbf768-jd6kh_1b2a00ce-727b-4065-b3b4-99f43d28b54d/placement-log/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.150232 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/config-reloader/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.157286 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/init-config-reloader/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.251899 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/prometheus/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.335539 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/thanos-sidecar/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.449376 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/setup-container/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.633545 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/setup-container/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.650816 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/setup-container/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.697797 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/rabbitmq/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.914565 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/setup-container/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.939298 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb_bb78bbf2-4067-4e58-b506-5dc2249d2aff/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:24 crc kubenswrapper[4724]: I0223 18:57:24.962036 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/rabbitmq/0.log" Feb 23 18:57:25 crc kubenswrapper[4724]: I0223 18:57:25.111404 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ctwxj_190a2171-8cbd-4bb4-a22d-76d1cf634934/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:25 crc kubenswrapper[4724]: I0223 18:57:25.184106 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr_8780dd09-5b4b-40f6-81ee-d2163bd3f066/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:25 crc kubenswrapper[4724]: I0223 18:57:25.386241 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-v2zdl_1a8e063f-7461-4365-bb92-a08b5d5c5b1f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:25 crc kubenswrapper[4724]: I0223 18:57:25.470865 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xldw2_3067abd3-b2db-458d-a71c-9f569c2a6bdc/ssh-known-hosts-edpm-deployment/0.log" Feb 23 18:57:25 crc kubenswrapper[4724]: I0223 18:57:25.990068 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f447dffc7-s2mfq_46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b/proxy-server/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.155528 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f447dffc7-s2mfq_46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b/proxy-httpd/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.224885 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2vrd_bc3d191e-4725-42ef-90af-16b57d7bf649/swift-ring-rebalance/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.295200 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-auditor/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.329701 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-reaper/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.472408 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-server/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.485085 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-auditor/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.506781 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-replicator/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.593977 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-replicator/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.679440 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-server/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.709000 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-updater/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.804634 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-auditor/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.813203 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-expirer/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.916519 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-server/0.log" Feb 23 18:57:26 crc kubenswrapper[4724]: I0223 18:57:26.937670 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-replicator/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.029547 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/rsync/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.039031 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-updater/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.132631 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/swift-recon-cron/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.328841 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn_3052df73-dea7-4da0-b0b1-f881cff2b747/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.889415 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0d826425-e3f8-42d4-823f-2f8db766ad9a/tempest-tests-tempest-tests-runner/0.log" Feb 23 18:57:27 crc kubenswrapper[4724]: I0223 18:57:27.892692 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_978d2d70-05f0-4404-8ace-2ba6f872d25a/test-operator-logs-container/0.log" Feb 23 18:57:28 crc kubenswrapper[4724]: I0223 18:57:28.297696 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x4j75_a46f5b1a-20be-4f6e-97fb-00662f817dc9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 18:57:28 crc kubenswrapper[4724]: I0223 18:57:28.954352 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_ad11f589-aa5d-493e-b431-25f6f7b0675b/watcher-applier/0.log" Feb 23 18:57:29 crc kubenswrapper[4724]: I0223 18:57:29.716525 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_dc365749-e4ec-46b3-9aa8-522dac685189/watcher-api-log/0.log" Feb 23 18:57:31 crc 
kubenswrapper[4724]: I0223 18:57:31.904242 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eadce7d0-a9bc-4840-919b-a341aba11ca2/memcached/0.log" Feb 23 18:57:32 crc kubenswrapper[4724]: I0223 18:57:32.337938 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_935753ed-464b-4bac-af1f-e356a473c78f/watcher-decision-engine/0.log" Feb 23 18:57:32 crc kubenswrapper[4724]: I0223 18:57:32.889009 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_dc365749-e4ec-46b3-9aa8-522dac685189/watcher-api/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.411798 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.595660 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.606948 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.749491 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.964118 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 18:57:57 crc kubenswrapper[4724]: I0223 18:57:57.967180 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 18:57:58 crc kubenswrapper[4724]: I0223 18:57:58.187611 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/extract/0.log" Feb 23 18:57:58 crc kubenswrapper[4724]: I0223 18:57:58.611555 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-vqls9_967a6928-46e0-4a1e-90bd-cc9a204d9099/manager/0.log" Feb 23 18:57:59 crc kubenswrapper[4724]: I0223 18:57:59.008949 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-gmdl7_dedf8817-f3cf-4630-a825-71059f681d10/manager/0.log" Feb 23 18:57:59 crc kubenswrapper[4724]: I0223 18:57:59.105256 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-f5x72_6b607306-d732-4142-83d4-92ae20c714cd/manager/0.log" Feb 23 18:57:59 crc kubenswrapper[4724]: I0223 18:57:59.352871 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-9gtq7_2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5/manager/0.log" Feb 23 18:57:59 crc kubenswrapper[4724]: I0223 18:57:59.590844 4724 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-zm7cw_a4842ca7-909d-4d11-bba6-75555f3599b3/manager/0.log" Feb 23 18:57:59 crc kubenswrapper[4724]: I0223 18:57:59.862614 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-22lgm_dd866f81-0e85-4690-b16d-45baf5e856ed/manager/0.log" Feb 23 18:58:00 crc kubenswrapper[4724]: I0223 18:58:00.120298 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pb2dv_7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3/manager/0.log" Feb 23 18:58:00 crc kubenswrapper[4724]: I0223 18:58:00.294108 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-djmpk_973124e7-0723-4a5d-ab81-0ef8619f8754/manager/0.log" Feb 23 18:58:00 crc kubenswrapper[4724]: I0223 18:58:00.364709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-fxj7d_b906fefc-aaf5-48c0-b45b-3d11dbda1c3e/manager/0.log" Feb 23 18:58:00 crc kubenswrapper[4724]: I0223 18:58:00.640703 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-xdfp8_73da6414-95e9-4d5a-a0ca-fbeb32048153/manager/0.log" Feb 23 18:58:00 crc kubenswrapper[4724]: I0223 18:58:00.899149 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-9s4mk_8b193934-08d8-4435-ae40-8b4d7b4878e7/manager/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.060697 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-d5z2j_8bc03a47-9ded-40c0-b924-0c936950a12a/manager/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.131353 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-p42tx_24d796b9-e6ea-4b70-9424-1352f71c80a6/manager/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.264081 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv_63923048-2ad5-45f9-9285-9d84dc711fa7/manager/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.553503 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-9d7777f98-c6ttl_264513fc-f807-42c5-8089-abc30cf6404b/operator/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.673159 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qnjwh_c7f91058-6754-42fd-916c-38da4dd0acd4/registry-server/0.log" Feb 23 18:58:01 crc kubenswrapper[4724]: I0223 18:58:01.880551 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-92g5j_a8f9c97e-0259-4c6e-b188-33081d1706fd/manager/0.log" Feb 23 18:58:02 crc kubenswrapper[4724]: I0223 18:58:02.028342 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-szmk8_77ba1933-d39b-4b30-9d8c-1500d7293444/manager/0.log" Feb 23 18:58:02 crc kubenswrapper[4724]: I0223 18:58:02.178324 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t5pkl_6848c8bf-d8f5-4215-90fb-454b794e33ae/operator/0.log" Feb 23 18:58:02 crc kubenswrapper[4724]: I0223 18:58:02.413202 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-wqsvk_e37a1f8b-cee7-4a13-879e-496d26735ab4/manager/0.log" Feb 23 18:58:02 crc kubenswrapper[4724]: I0223 18:58:02.684997 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-4tnw2_ca793345-c1e2-4207-844b-170dd5b70066/manager/0.log" Feb 23 18:58:02 crc kubenswrapper[4724]: I0223 18:58:02.899403 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-d85f4_3b37faa8-6e4e-427a-9c1a-84993ed85290/manager/0.log" Feb 23 18:58:03 crc kubenswrapper[4724]: I0223 18:58:03.080101 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5cb6b78489-7tdgw_5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a/manager/0.log" Feb 23 18:58:04 crc kubenswrapper[4724]: I0223 18:58:04.201973 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bf9ddc465-xrp8k_c38380c9-1ff8-4a96-9c4a-15ed760a25db/manager/0.log" Feb 23 18:58:08 crc kubenswrapper[4724]: I0223 18:58:08.799166 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-4zgfm_70c55fa9-1fa4-415c-98c4-adfe080201d1/manager/0.log" Feb 23 18:58:23 crc kubenswrapper[4724]: I0223 18:58:23.306812 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n4kjh_e842e9a3-2897-414d-8606-46bb70b207d9/control-plane-machine-set-operator/0.log" Feb 23 18:58:23 crc kubenswrapper[4724]: I0223 18:58:23.533997 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xttsp_4a9e0634-64a7-4106-8a10-bfed1ab672da/kube-rbac-proxy/0.log" Feb 23 18:58:23 crc kubenswrapper[4724]: I0223 18:58:23.535559 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xttsp_4a9e0634-64a7-4106-8a10-bfed1ab672da/machine-api-operator/0.log" Feb 23 18:58:36 crc kubenswrapper[4724]: I0223 18:58:36.197032 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vlrjb_4a08b754-7169-4f53-9212-84ed962b15dd/cert-manager-controller/0.log" Feb 23 18:58:36 crc kubenswrapper[4724]: I0223 18:58:36.345740 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pns6j_209587a2-48da-480c-93b0-17a306f362a3/cert-manager-cainjector/0.log" Feb 23 18:58:36 crc kubenswrapper[4724]: I0223 18:58:36.423777 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-zzpcv_2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b/cert-manager-webhook/0.log" Feb 23 18:58:48 crc kubenswrapper[4724]: I0223 18:58:48.737323 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-lrrzf_86c89d64-bec0-4e95-ae8c-194200a9f20c/nmstate-console-plugin/0.log" Feb 23 18:58:48 crc kubenswrapper[4724]: I0223 18:58:48.848507 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-cfznb_28356f9d-af74-4f20-ba5c-8a40fda9ef6d/nmstate-handler/0.log" Feb 23 18:58:48 crc kubenswrapper[4724]: I0223 18:58:48.922830 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2s7zq_bce58068-4adb-427b-96f8-e289d595515d/kube-rbac-proxy/0.log" Feb 23 18:58:48 crc kubenswrapper[4724]: I0223 18:58:48.945926 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2s7zq_bce58068-4adb-427b-96f8-e289d595515d/nmstate-metrics/0.log" Feb 23 18:58:49 crc kubenswrapper[4724]: I0223 18:58:49.102250 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-hs7ws_12ceca0c-78de-41ff-8e20-cdf172bd915e/nmstate-operator/0.log" Feb 23 18:58:49 crc kubenswrapper[4724]: I0223 18:58:49.137814 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-49gxm_1217d925-38a5-4311-a32a-49e306238283/nmstate-webhook/0.log" Feb 23 18:58:57 crc kubenswrapper[4724]: I0223 18:58:57.751673 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:58:57 crc kubenswrapper[4724]: I0223 18:58:57.752208 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:59:02 crc kubenswrapper[4724]: I0223 18:59:02.219874 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5jjjl_814ddfc1-f41d-41fe-9e19-72ebf86f8950/prometheus-operator/0.log" Feb 23 18:59:02 crc kubenswrapper[4724]: I0223 18:59:02.354157 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml_88e5fd13-0f53-4516-b0e8-73f22b9837eb/prometheus-operator-admission-webhook/0.log" Feb 23 18:59:02 crc kubenswrapper[4724]: I0223 18:59:02.455326 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd_7750cf0f-feab-4fd7-a8a3-4fc9298a169e/prometheus-operator-admission-webhook/0.log" Feb 23 18:59:02 crc kubenswrapper[4724]: I0223 18:59:02.550353 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-djp7f_0a3d2d9a-1225-4ec1-ac5b-4657ca676522/operator/0.log" Feb 23 18:59:02 crc kubenswrapper[4724]: I0223 18:59:02.605278 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6j5cq_606f1fc9-e753-4c28-8386-dfe7bb1f4eca/perses-operator/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.101523 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fhn7w_8637711e-f5d2-43e1-b8f6-65df43b16ffc/kube-rbac-proxy/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.158906 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-fhn7w_8637711e-f5d2-43e1-b8f6-65df43b16ffc/controller/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.349356 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.492040 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.493293 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.509190 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.552031 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.703665 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.704126 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.728662 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.778228 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.905169 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.907371 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.918780 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 18:59:17 crc kubenswrapper[4724]: I0223 18:59:17.950384 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/controller/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.150297 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/frr-metrics/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.167064 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/kube-rbac-proxy/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.172794 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/kube-rbac-proxy-frr/0.log" Feb 23 18:59:18 crc 
kubenswrapper[4724]: I0223 18:59:18.400053 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/reloader/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.411854 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-kcvmh_39fa75d7-3799-41ce-9a9e-ebf9dd8c347b/frr-k8s-webhook-server/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.673461 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bb6655d58-zmrrz_8fb21fbd-388b-4b8f-a0ec-78f2396bf456/manager/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.887976 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-745c85d5d8-v6vwt_7e8b0053-5568-4e4e-8021-f2351dc9f4df/webhook-server/0.log" Feb 23 18:59:18 crc kubenswrapper[4724]: I0223 18:59:18.942223 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dxbt6_a55b73c4-da87-4ce8-8418-3d6d854c0b0e/kube-rbac-proxy/0.log" Feb 23 18:59:19 crc kubenswrapper[4724]: I0223 18:59:19.586751 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dxbt6_a55b73c4-da87-4ce8-8418-3d6d854c0b0e/speaker/0.log" Feb 23 18:59:19 crc kubenswrapper[4724]: I0223 18:59:19.792544 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/frr/0.log" Feb 23 18:59:27 crc kubenswrapper[4724]: I0223 18:59:27.752513 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:59:27 crc kubenswrapper[4724]: I0223 18:59:27.753089 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:59:33 crc kubenswrapper[4724]: I0223 18:59:33.653519 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 18:59:33 crc kubenswrapper[4724]: I0223 18:59:33.818902 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 18:59:33 crc kubenswrapper[4724]: I0223 18:59:33.829174 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 18:59:33 crc kubenswrapper[4724]: I0223 18:59:33.838550 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 18:59:33 crc kubenswrapper[4724]: I0223 18:59:33.985848 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.006684 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.021927 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/extract/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.162329 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.361031 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.384576 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.390941 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.552711 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.575803 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/extract/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.576742 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.739883 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.880118 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.895560 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 18:59:34 crc kubenswrapper[4724]: I0223 18:59:34.917684 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.068627 4724 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.136692 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.315341 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.482454 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.509060 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.575785 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.705258 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/registry-server/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.777905 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 18:59:35 crc kubenswrapper[4724]: I0223 18:59:35.830850 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.024360 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.293715 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.306847 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.347633 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.386270 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/registry-server/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.514530 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.546260 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/extract/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.547149 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.767106 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.768289 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w8klm_67588304-35a3-404e-bd48-9f7bc0ec5a44/marketplace-operator/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.993415 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 18:59:36 crc kubenswrapper[4724]: I0223 18:59:36.998717 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.003811 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.128149 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.157043 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.224138 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.325864 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/registry-server/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.422536 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.443056 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.449039 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log" Feb 23 18:59:37 crc kubenswrapper[4724]: 
Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.616601 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log"
Feb 23 18:59:37 crc kubenswrapper[4724]: I0223 18:59:37.665629 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log"
Feb 23 18:59:38 crc kubenswrapper[4724]: I0223 18:59:38.121605 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/registry-server/0.log"
Feb 23 18:59:52 crc kubenswrapper[4724]: I0223 18:59:52.156121 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5jjjl_814ddfc1-f41d-41fe-9e19-72ebf86f8950/prometheus-operator/0.log"
Feb 23 18:59:52 crc kubenswrapper[4724]: I0223 18:59:52.203017 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml_88e5fd13-0f53-4516-b0e8-73f22b9837eb/prometheus-operator-admission-webhook/0.log"
Feb 23 18:59:52 crc kubenswrapper[4724]: I0223 18:59:52.220299 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd_7750cf0f-feab-4fd7-a8a3-4fc9298a169e/prometheus-operator-admission-webhook/0.log"
Feb 23 18:59:53 crc kubenswrapper[4724]: I0223 18:59:53.018544 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-djp7f_0a3d2d9a-1225-4ec1-ac5b-4657ca676522/operator/0.log"
Feb 23 18:59:53 crc kubenswrapper[4724]: I0223 18:59:53.211785 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6j5cq_606f1fc9-e753-4c28-8386-dfe7bb1f4eca/perses-operator/0.log"
Feb 23 18:59:57 crc kubenswrapper[4724]: I0223 18:59:57.751793 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 18:59:57 crc kubenswrapper[4724]: I0223 18:59:57.752265 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 18:59:57 crc kubenswrapper[4724]: I0223 18:59:57.752321 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r"
Feb 23 18:59:57 crc kubenswrapper[4724]: I0223 18:59:57.753122 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 18:59:57 crc kubenswrapper[4724]: I0223 18:59:57.753180 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" gracePeriod=600
Feb 23 18:59:58 crc kubenswrapper[4724]: E0223 18:59:58.379673 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 18:59:58 crc kubenswrapper[4724]: I0223 18:59:58.445973 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" exitCode=0
Feb 23 18:59:58 crc kubenswrapper[4724]: I0223 18:59:58.446030 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"}
Feb 23 18:59:58 crc kubenswrapper[4724]: I0223 18:59:58.446063 4724 scope.go:117] "RemoveContainer" containerID="639fd3b8f1387bc69e2a8c57a6b59cab3604e4b36459949d5f23a41a140cc1ce"
Feb 23 18:59:58 crc kubenswrapper[4724]: I0223 18:59:58.446703 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"
Feb 23 18:59:58 crc kubenswrapper[4724]: E0223 18:59:58.446933 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.155491 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"]
Feb 23 19:00:00 crc kubenswrapper[4724]: E0223 19:00:00.156623 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f9aae3-023f-4bbc-9577-3a8af9027744" containerName="container-00"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.156641 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f9aae3-023f-4bbc-9577-3a8af9027744" containerName="container-00"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.156959 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f9aae3-023f-4bbc-9577-3a8af9027744" containerName="container-00"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.162773 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.165351 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.168156 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.184650 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"]
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.330218 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfg8\" (UniqueName: \"kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.330740 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.330909 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.433109 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.433199 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.433275 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfg8\" (UniqueName: \"kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.434413 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.865065 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:00 crc kubenswrapper[4724]: I0223 19:00:00.873125 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfg8\" (UniqueName: \"kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8\") pod \"collect-profiles-29531220-dfn9t\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:01 crc kubenswrapper[4724]: I0223 19:00:01.090709 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:01 crc kubenswrapper[4724]: I0223 19:00:01.665214 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"]
Feb 23 19:00:02 crc kubenswrapper[4724]: I0223 19:00:02.491770 4724 generic.go:334] "Generic (PLEG): container finished" podID="cf17914f-3192-479c-8146-0d2fe0cd253a" containerID="5bef58b0a46ee29e74ac36d08dd0635f9c9fc4534e0ff6e6bc5b35160d1d3925" exitCode=0
Feb 23 19:00:02 crc kubenswrapper[4724]: I0223 19:00:02.491854 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t" event={"ID":"cf17914f-3192-479c-8146-0d2fe0cd253a","Type":"ContainerDied","Data":"5bef58b0a46ee29e74ac36d08dd0635f9c9fc4534e0ff6e6bc5b35160d1d3925"}
Feb 23 19:00:02 crc kubenswrapper[4724]: I0223 19:00:02.492122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t" event={"ID":"cf17914f-3192-479c-8146-0d2fe0cd253a","Type":"ContainerStarted","Data":"15e285593b8d873f349298d943dbb8e74b64230e71e636ce2bdddde8fd0f68a4"}
Feb 23 19:00:03 crc kubenswrapper[4724]: I0223 19:00:03.857603 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.027815 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntfg8\" (UniqueName: \"kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8\") pod \"cf17914f-3192-479c-8146-0d2fe0cd253a\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") "
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.028159 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume\") pod \"cf17914f-3192-479c-8146-0d2fe0cd253a\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") "
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.028253 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume\") pod \"cf17914f-3192-479c-8146-0d2fe0cd253a\" (UID: \"cf17914f-3192-479c-8146-0d2fe0cd253a\") "
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.029371 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf17914f-3192-479c-8146-0d2fe0cd253a" (UID: "cf17914f-3192-479c-8146-0d2fe0cd253a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.030166 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf17914f-3192-479c-8146-0d2fe0cd253a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.039645 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf17914f-3192-479c-8146-0d2fe0cd253a" (UID: "cf17914f-3192-479c-8146-0d2fe0cd253a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.039739 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8" (OuterVolumeSpecName: "kube-api-access-ntfg8") pod "cf17914f-3192-479c-8146-0d2fe0cd253a" (UID: "cf17914f-3192-479c-8146-0d2fe0cd253a"). InnerVolumeSpecName "kube-api-access-ntfg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.132341 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntfg8\" (UniqueName: \"kubernetes.io/projected/cf17914f-3192-479c-8146-0d2fe0cd253a-kube-api-access-ntfg8\") on node \"crc\" DevicePath \"\""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.132381 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf17914f-3192-479c-8146-0d2fe0cd253a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.515331 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t" event={"ID":"cf17914f-3192-479c-8146-0d2fe0cd253a","Type":"ContainerDied","Data":"15e285593b8d873f349298d943dbb8e74b64230e71e636ce2bdddde8fd0f68a4"}
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.515367 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-dfn9t"
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.515376 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e285593b8d873f349298d943dbb8e74b64230e71e636ce2bdddde8fd0f68a4"
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.938048 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"]
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.947549 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531175-q6kgt"]
Feb 23 19:00:04 crc kubenswrapper[4724]: I0223 19:00:04.963802 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3552e8-ee24-41b0-a477-81536c660b7f" path="/var/lib/kubelet/pods/ae3552e8-ee24-41b0-a477-81536c660b7f/volumes"
Feb 23 19:00:12 crc kubenswrapper[4724]: I0223 19:00:12.951818 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"
Feb 23 19:00:12 crc kubenswrapper[4724]: E0223 19:00:12.953030 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:00:27 crc kubenswrapper[4724]: I0223 19:00:27.950951 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"
Feb 23 19:00:27 crc kubenswrapper[4724]: E0223 19:00:27.951780 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:00:42 crc kubenswrapper[4724]: I0223 19:00:42.958128 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"
Feb 23 19:00:42 crc kubenswrapper[4724]: E0223 19:00:42.961316 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:00:52 crc kubenswrapper[4724]: I0223 19:00:52.014246 4724 scope.go:117] "RemoveContainer" containerID="4c2b4dd3f6b984562adb37fd94b97ddcafbc4bf12890f3d84fd702b0320a637d"
Feb 23 19:00:55 crc kubenswrapper[4724]: I0223 19:00:55.951589 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6"
Feb 23 19:00:55 crc kubenswrapper[4724]: E0223 19:00:55.952543 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.175486 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29531221-dnhc9"]
Feb 23 19:01:00 crc kubenswrapper[4724]: E0223 19:01:00.176363 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf17914f-3192-479c-8146-0d2fe0cd253a" containerName="collect-profiles"
Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.176375 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf17914f-3192-479c-8146-0d2fe0cd253a" containerName="collect-profiles"
Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.176608 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf17914f-3192-479c-8146-0d2fe0cd253a" containerName="collect-profiles"
Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.177297 4724 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.204635 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531221-dnhc9"] Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.289913 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.290071 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.290264 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.290380 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdsv\" (UniqueName: \"kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.391877 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.392238 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.392297 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.392347 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdsv\" (UniqueName: \"kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.402331 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.408916 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.410429 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.413718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdsv\" (UniqueName: \"kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv\") pod \"keystone-cron-29531221-dnhc9\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.507997 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:00 crc kubenswrapper[4724]: I0223 19:01:00.944959 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531221-dnhc9"] Feb 23 19:01:01 crc kubenswrapper[4724]: I0223 19:01:01.136032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-dnhc9" event={"ID":"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01","Type":"ContainerStarted","Data":"f7fb3bb8fe47d0a240af8a0816b3edd8a30df17cd93b6deb87c48bffc855181f"} Feb 23 19:01:02 crc kubenswrapper[4724]: I0223 19:01:02.149300 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-dnhc9" event={"ID":"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01","Type":"ContainerStarted","Data":"0e9df1cfa276da24c15e168ea5d45be30669fe621a398438ad81f2c6a4f78b00"} Feb 23 19:01:02 crc kubenswrapper[4724]: I0223 19:01:02.174474 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29531221-dnhc9" podStartSLOduration=2.174453327 podStartE2EDuration="2.174453327s" podCreationTimestamp="2026-02-23 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:01:02.172924069 +0000 UTC m=+5417.989123689" watchObservedRunningTime="2026-02-23 19:01:02.174453327 +0000 UTC m=+5417.990652927" Feb 23 19:01:05 crc kubenswrapper[4724]: I0223 19:01:05.231613 4724 generic.go:334] "Generic (PLEG): container finished" podID="822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" containerID="0e9df1cfa276da24c15e168ea5d45be30669fe621a398438ad81f2c6a4f78b00" exitCode=0 Feb 23 19:01:05 crc kubenswrapper[4724]: I0223 19:01:05.231703 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-dnhc9" event={"ID":"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01","Type":"ContainerDied","Data":"0e9df1cfa276da24c15e168ea5d45be30669fe621a398438ad81f2c6a4f78b00"} Feb 23 19:01:06 crc kubenswrapper[4724]: 
I0223 19:01:06.606342 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.734470 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data\") pod \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.734769 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpdsv\" (UniqueName: \"kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv\") pod \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.734796 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle\") pod \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.734832 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys\") pod \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\" (UID: \"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01\") " Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.742270 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" (UID: "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.742309 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv" (OuterVolumeSpecName: "kube-api-access-qpdsv") pod "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" (UID: "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01"). InnerVolumeSpecName "kube-api-access-qpdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.774166 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" (UID: "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.799733 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data" (OuterVolumeSpecName: "config-data") pod "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" (UID: "822d2059-6be4-4c9f-8ca8-b38ebaf5ff01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.838854 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.838894 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpdsv\" (UniqueName: \"kubernetes.io/projected/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-kube-api-access-qpdsv\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.838907 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:06 crc kubenswrapper[4724]: I0223 19:01:06.838919 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/822d2059-6be4-4c9f-8ca8-b38ebaf5ff01-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:07 crc kubenswrapper[4724]: I0223 19:01:07.261421 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-dnhc9" event={"ID":"822d2059-6be4-4c9f-8ca8-b38ebaf5ff01","Type":"ContainerDied","Data":"f7fb3bb8fe47d0a240af8a0816b3edd8a30df17cd93b6deb87c48bffc855181f"} Feb 23 19:01:07 crc kubenswrapper[4724]: I0223 19:01:07.262030 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fb3bb8fe47d0a240af8a0816b3edd8a30df17cd93b6deb87c48bffc855181f" Feb 23 19:01:07 crc kubenswrapper[4724]: I0223 19:01:07.261602 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531221-dnhc9" Feb 23 19:01:10 crc kubenswrapper[4724]: I0223 19:01:10.953336 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:01:10 crc kubenswrapper[4724]: E0223 19:01:10.954538 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.833761 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:14 crc kubenswrapper[4724]: E0223 19:01:14.834931 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" containerName="keystone-cron" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.834953 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" containerName="keystone-cron" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.835312 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="822d2059-6be4-4c9f-8ca8-b38ebaf5ff01" containerName="keystone-cron" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.837886 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.873755 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.926872 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq68v\" (UniqueName: \"kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.927096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:14 crc kubenswrapper[4724]: I0223 19:01:14.927222 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.028921 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.029099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq68v\" (UniqueName: \"kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.029167 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.029746 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.029793 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.048653 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mq68v\" (UniqueName: \"kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v\") pod \"redhat-operators-fk467\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.157219 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:15 crc kubenswrapper[4724]: I0223 19:01:15.628379 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:16 crc kubenswrapper[4724]: I0223 19:01:16.354608 4724 generic.go:334] "Generic (PLEG): container finished" podID="3fecd3df-95fe-4232-a498-8297906e023a" containerID="b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f" exitCode=0 Feb 23 19:01:16 crc kubenswrapper[4724]: I0223 19:01:16.354757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerDied","Data":"b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f"} Feb 23 19:01:16 crc kubenswrapper[4724]: I0223 19:01:16.354909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerStarted","Data":"2e79b8d37bad0d8566b1755bfaa2582e6aa83946a0c173c13152b843df14e826"} Feb 23 19:01:16 crc kubenswrapper[4724]: I0223 19:01:16.356598 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 19:01:18 crc kubenswrapper[4724]: I0223 19:01:18.391533 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerStarted","Data":"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006"} Feb 23 19:01:21 crc kubenswrapper[4724]: I0223 19:01:21.424628 4724 generic.go:334] "Generic (PLEG): container finished" podID="3fecd3df-95fe-4232-a498-8297906e023a" containerID="0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006" exitCode=0 Feb 23 19:01:21 crc kubenswrapper[4724]: I0223 19:01:21.424856 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerDied","Data":"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006"} Feb 23 19:01:22 crc kubenswrapper[4724]: I0223 19:01:22.436703 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerStarted","Data":"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6"} Feb 23 19:01:22 crc kubenswrapper[4724]: I0223 19:01:22.454969 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fk467" podStartSLOduration=2.982490664 podStartE2EDuration="8.454941927s" podCreationTimestamp="2026-02-23 19:01:14 +0000 UTC" firstStartedPulling="2026-02-23 19:01:16.356332317 +0000 UTC m=+5432.172531917" lastFinishedPulling="2026-02-23 19:01:21.82878358 +0000 UTC m=+5437.644983180" observedRunningTime="2026-02-23 19:01:22.452385994 +0000 UTC m=+5438.268585624" watchObservedRunningTime="2026-02-23 19:01:22.454941927 +0000 UTC m=+5438.271141527" Feb 23 19:01:22 crc 
kubenswrapper[4724]: I0223 19:01:22.951150 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:01:22 crc kubenswrapper[4724]: E0223 19:01:22.951479 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:01:25 crc kubenswrapper[4724]: I0223 19:01:25.158337 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:25 crc kubenswrapper[4724]: I0223 19:01:25.159963 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:26 crc kubenswrapper[4724]: I0223 19:01:26.214447 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fk467" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="registry-server" probeResult="failure" output=< Feb 23 19:01:26 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 23 19:01:26 crc kubenswrapper[4724]: > Feb 23 19:01:34 crc kubenswrapper[4724]: I0223 19:01:34.960901 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:01:34 crc kubenswrapper[4724]: E0223 19:01:34.961539 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:01:35 crc kubenswrapper[4724]: I0223 19:01:35.227472 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:35 crc kubenswrapper[4724]: I0223 19:01:35.296090 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:35 crc kubenswrapper[4724]: I0223 19:01:35.475167 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:36 crc kubenswrapper[4724]: I0223 19:01:36.592287 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fk467" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="registry-server" containerID="cri-o://e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6" gracePeriod=2 Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.159753 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.337342 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content\") pod \"3fecd3df-95fe-4232-a498-8297906e023a\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.337548 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq68v\" (UniqueName: \"kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v\") pod \"3fecd3df-95fe-4232-a498-8297906e023a\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.337644 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities\") pod \"3fecd3df-95fe-4232-a498-8297906e023a\" (UID: \"3fecd3df-95fe-4232-a498-8297906e023a\") " Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.339016 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities" (OuterVolumeSpecName: "utilities") pod "3fecd3df-95fe-4232-a498-8297906e023a" (UID: "3fecd3df-95fe-4232-a498-8297906e023a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.342553 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v" (OuterVolumeSpecName: "kube-api-access-mq68v") pod "3fecd3df-95fe-4232-a498-8297906e023a" (UID: "3fecd3df-95fe-4232-a498-8297906e023a"). InnerVolumeSpecName "kube-api-access-mq68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.440965 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq68v\" (UniqueName: \"kubernetes.io/projected/3fecd3df-95fe-4232-a498-8297906e023a-kube-api-access-mq68v\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.440998 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.452928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fecd3df-95fe-4232-a498-8297906e023a" (UID: "3fecd3df-95fe-4232-a498-8297906e023a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.544353 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fecd3df-95fe-4232-a498-8297906e023a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.604203 4724 generic.go:334] "Generic (PLEG): container finished" podID="3fecd3df-95fe-4232-a498-8297906e023a" containerID="e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6" exitCode=0 Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.604482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerDied","Data":"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6"} Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.604509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk467" event={"ID":"3fecd3df-95fe-4232-a498-8297906e023a","Type":"ContainerDied","Data":"2e79b8d37bad0d8566b1755bfaa2582e6aa83946a0c173c13152b843df14e826"} Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.604540 4724 scope.go:117] "RemoveContainer" containerID="e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.604704 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fk467" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.635764 4724 scope.go:117] "RemoveContainer" containerID="0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.643382 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.654359 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fk467"] Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.661663 4724 scope.go:117] "RemoveContainer" containerID="b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.709993 4724 scope.go:117] "RemoveContainer" containerID="e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6" Feb 23 19:01:37 crc kubenswrapper[4724]: E0223 19:01:37.710509 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6\": container with ID starting with e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6 not found: ID does not exist" containerID="e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.710546 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6"} err="failed to get container status \"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6\": rpc error: code = NotFound desc = could not find container \"e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6\": container with ID starting with e5a642e0b7b4aba742f2b10aa05ee4d5ef108c509a91d23ae4c7a6e679f25ca6 not found: ID does not exist" Feb 23 19:01:37 crc 
kubenswrapper[4724]: I0223 19:01:37.710570 4724 scope.go:117] "RemoveContainer" containerID="0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006" Feb 23 19:01:37 crc kubenswrapper[4724]: E0223 19:01:37.710879 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006\": container with ID starting with 0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006 not found: ID does not exist" containerID="0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.710919 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006"} err="failed to get container status \"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006\": rpc error: code = NotFound desc = could not find container \"0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006\": container with ID starting with 0db4b31326d28f51646d14f6da5db0d1b15e30edefc3fe4e576d61ac10320006 not found: ID does not exist" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.710946 4724 scope.go:117] "RemoveContainer" containerID="b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f" Feb 23 19:01:37 crc kubenswrapper[4724]: E0223 19:01:37.711305 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f\": container with ID starting with b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f not found: ID does not exist" containerID="b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f" Feb 23 19:01:37 crc kubenswrapper[4724]: I0223 19:01:37.711376 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f"} err="failed to get container status \"b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f\": rpc error: code = NotFound desc = could not find container \"b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f\": container with ID starting with b81597bea16cedb0dfbec9f4cdaddb7b970c79c633182f010de8f18d14861d7f not found: ID does not exist" Feb 23 19:01:38 crc kubenswrapper[4724]: I0223 19:01:38.965103 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fecd3df-95fe-4232-a498-8297906e023a" path="/var/lib/kubelet/pods/3fecd3df-95fe-4232-a498-8297906e023a/volumes" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.878595 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:39 crc kubenswrapper[4724]: E0223 19:01:39.879084 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="registry-server" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.879107 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="registry-server" Feb 23 19:01:39 crc kubenswrapper[4724]: E0223 19:01:39.879144 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="extract-content" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.879154 4724 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="extract-content" Feb 23 19:01:39 crc kubenswrapper[4724]: E0223 19:01:39.879165 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="extract-utilities" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.879173 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="extract-utilities" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.879682 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fecd3df-95fe-4232-a498-8297906e023a" containerName="registry-server" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.881546 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.917712 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.992758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9clhj\" (UniqueName: \"kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.993686 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:39 crc kubenswrapper[4724]: I0223 19:01:39.993813 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.095624 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9clhj\" (UniqueName: \"kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.095875 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.095977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc 
kubenswrapper[4724]: I0223 19:01:40.096738 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.096789 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.118033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9clhj\" (UniqueName: \"kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj\") pod \"community-operators-cl7pw\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.211136 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:40 crc kubenswrapper[4724]: I0223 19:01:40.733596 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:41 crc kubenswrapper[4724]: I0223 19:01:41.650575 4724 generic.go:334] "Generic (PLEG): container finished" podID="e110221b-522e-48a5-89ea-16b295239083" containerID="781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a" exitCode=0 Feb 23 19:01:41 crc kubenswrapper[4724]: I0223 19:01:41.650772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerDied","Data":"781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a"} Feb 23 19:01:41 crc kubenswrapper[4724]: I0223 19:01:41.651145 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerStarted","Data":"99739998d6f242cd748e2775571bec05e4a0c5db94f0087671ed718e9a3ffd25"} Feb 23 19:01:42 crc kubenswrapper[4724]: I0223 19:01:42.666368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerStarted","Data":"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5"} Feb 23 19:01:44 crc kubenswrapper[4724]: I0223 19:01:44.694238 4724 generic.go:334] "Generic (PLEG): container finished" podID="e110221b-522e-48a5-89ea-16b295239083" containerID="e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5" exitCode=0 Feb 23 19:01:44 crc kubenswrapper[4724]: I0223 19:01:44.694258 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerDied","Data":"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5"} Feb 23 19:01:45 crc kubenswrapper[4724]: I0223 19:01:45.709311 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" 
event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerStarted","Data":"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34"} Feb 23 19:01:45 crc kubenswrapper[4724]: I0223 19:01:45.743200 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cl7pw" podStartSLOduration=3.274258783 podStartE2EDuration="6.743176317s" podCreationTimestamp="2026-02-23 19:01:39 +0000 UTC" firstStartedPulling="2026-02-23 19:01:41.654464143 +0000 UTC m=+5457.470663753" lastFinishedPulling="2026-02-23 19:01:45.123381647 +0000 UTC m=+5460.939581287" observedRunningTime="2026-02-23 19:01:45.732364189 +0000 UTC m=+5461.548563809" watchObservedRunningTime="2026-02-23 19:01:45.743176317 +0000 UTC m=+5461.559375927" Feb 23 19:01:45 crc kubenswrapper[4724]: I0223 19:01:45.951968 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:01:45 crc kubenswrapper[4724]: E0223 19:01:45.952446 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:01:50 crc kubenswrapper[4724]: I0223 19:01:50.212874 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:50 crc kubenswrapper[4724]: I0223 19:01:50.213440 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:50 crc kubenswrapper[4724]: I0223 19:01:50.268946 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:50 crc kubenswrapper[4724]: I0223 19:01:50.827461 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:50 crc kubenswrapper[4724]: I0223 19:01:50.891604 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:52 crc kubenswrapper[4724]: I0223 19:01:52.133931 4724 scope.go:117] "RemoveContainer" containerID="284f90198dc471a183e6f1329505e4bc608b01bc4cab6c86c4116d7c37ab9140" Feb 23 19:01:52 crc kubenswrapper[4724]: I0223 19:01:52.790757 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cl7pw" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="registry-server" containerID="cri-o://33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34" gracePeriod=2 Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.270056 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.417631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9clhj\" (UniqueName: \"kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj\") pod \"e110221b-522e-48a5-89ea-16b295239083\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.417775 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities\") pod \"e110221b-522e-48a5-89ea-16b295239083\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.418040 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content\") pod \"e110221b-522e-48a5-89ea-16b295239083\" (UID: \"e110221b-522e-48a5-89ea-16b295239083\") " Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.418783 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities" (OuterVolumeSpecName: "utilities") pod "e110221b-522e-48a5-89ea-16b295239083" (UID: "e110221b-522e-48a5-89ea-16b295239083"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.428879 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj" (OuterVolumeSpecName: "kube-api-access-9clhj") pod "e110221b-522e-48a5-89ea-16b295239083" (UID: "e110221b-522e-48a5-89ea-16b295239083"). InnerVolumeSpecName "kube-api-access-9clhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.488970 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e110221b-522e-48a5-89ea-16b295239083" (UID: "e110221b-522e-48a5-89ea-16b295239083"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.520127 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.520166 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e110221b-522e-48a5-89ea-16b295239083-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.520177 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9clhj\" (UniqueName: \"kubernetes.io/projected/e110221b-522e-48a5-89ea-16b295239083-kube-api-access-9clhj\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.808337 4724 generic.go:334] "Generic (PLEG): container finished" podID="e110221b-522e-48a5-89ea-16b295239083" containerID="33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34" exitCode=0 Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.808690 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cl7pw" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.811848 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerDied","Data":"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34"} Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.811913 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl7pw" event={"ID":"e110221b-522e-48a5-89ea-16b295239083","Type":"ContainerDied","Data":"99739998d6f242cd748e2775571bec05e4a0c5db94f0087671ed718e9a3ffd25"} Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.811942 4724 scope.go:117] "RemoveContainer" containerID="33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.848137 4724 scope.go:117] "RemoveContainer" containerID="e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.860774 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.869982 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cl7pw"] Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.891272 4724 scope.go:117] "RemoveContainer" containerID="781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.923977 4724 scope.go:117] "RemoveContainer" containerID="33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34" Feb 23 19:01:53 crc kubenswrapper[4724]: E0223 19:01:53.924465 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34\": container with ID starting with 33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34 not found: ID does not exist" containerID="33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.924582 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34"} err="failed to get container status \"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34\": rpc error: code = NotFound desc = could not find container \"33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34\": container with ID starting with 33560daa40ddc60e549f49affbb4a7db192670d7740d4e0e97de0538cea4be34 not found: ID does not exist" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.924655 4724 scope.go:117] "RemoveContainer" containerID="e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5" Feb 23 19:01:53 crc kubenswrapper[4724]: E0223 19:01:53.924961 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5\": container with ID starting with e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5 not found: ID does not exist" containerID="e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.925072 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5"} err="failed to get container status \"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5\": rpc error: code = NotFound desc = could not find container \"e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5\": container with ID starting with e470b015bbbddd21264cccac8c79fb5e70431325dfd2e7a456260ff25b0e07a5 not found: ID does not exist" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.925138 4724 scope.go:117] "RemoveContainer" containerID="781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a" Feb 23 19:01:53 crc kubenswrapper[4724]: E0223 19:01:53.925616 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a\": container with ID starting with 781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a not found: ID does not exist" containerID="781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a" Feb 23 19:01:53 crc kubenswrapper[4724]: I0223 19:01:53.925727 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a"} err="failed to get container status \"781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a\": rpc error: code = NotFound desc = could not find container \"781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a\": container with ID starting with 781b9da871b7b5428dc811e7ace274b12773769697d2c9629250251afbb9c94a not found: ID does not exist" Feb 23 19:01:54 crc kubenswrapper[4724]: I0223 19:01:54.972187 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e110221b-522e-48a5-89ea-16b295239083" path="/var/lib/kubelet/pods/e110221b-522e-48a5-89ea-16b295239083/volumes" Feb 23 19:01:56 crc kubenswrapper[4724]: I0223 19:01:56.843150 4724 generic.go:334] "Generic (PLEG): container finished" podID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerID="6f1c1d463f48ba4aaa2d48a4c9ac140d46992a31fde19caab9e3fac866280192" exitCode=0 Feb 23 19:01:56 crc kubenswrapper[4724]: 
I0223 19:01:56.843241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78zrq/must-gather-t654n" event={"ID":"7e70813e-32c6-4649-9ae4-5291ceed814e","Type":"ContainerDied","Data":"6f1c1d463f48ba4aaa2d48a4c9ac140d46992a31fde19caab9e3fac866280192"} Feb 23 19:01:56 crc kubenswrapper[4724]: I0223 19:01:56.844269 4724 scope.go:117] "RemoveContainer" containerID="6f1c1d463f48ba4aaa2d48a4c9ac140d46992a31fde19caab9e3fac866280192" Feb 23 19:01:57 crc kubenswrapper[4724]: I0223 19:01:57.540171 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78zrq_must-gather-t654n_7e70813e-32c6-4649-9ae4-5291ceed814e/gather/0.log" Feb 23 19:01:58 crc kubenswrapper[4724]: I0223 19:01:58.951526 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:01:58 crc kubenswrapper[4724]: E0223 19:01:58.952216 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.465142 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78zrq/must-gather-t654n"] Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.466823 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-78zrq/must-gather-t654n" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="copy" containerID="cri-o://84ee3e3f59657325bcd0ef11a242405818912936714c67b72a09d175c28ef5c2" gracePeriod=2 Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.478795 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78zrq/must-gather-t654n"] Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.940255 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78zrq_must-gather-t654n_7e70813e-32c6-4649-9ae4-5291ceed814e/copy/0.log" Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.941093 4724 generic.go:334] "Generic (PLEG): container finished" podID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerID="84ee3e3f59657325bcd0ef11a242405818912936714c67b72a09d175c28ef5c2" exitCode=143 Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.941176 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0637af9c94f50f8b83851a65933d9a4c467401323e46d9ca614cd0537771a526" Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.983043 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78zrq_must-gather-t654n_7e70813e-32c6-4649-9ae4-5291ceed814e/copy/0.log" Feb 23 19:02:06 crc kubenswrapper[4724]: I0223 19:02:06.983663 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.127149 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output\") pod \"7e70813e-32c6-4649-9ae4-5291ceed814e\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.127450 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrrq9\" (UniqueName: \"kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9\") pod \"7e70813e-32c6-4649-9ae4-5291ceed814e\" (UID: \"7e70813e-32c6-4649-9ae4-5291ceed814e\") " Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.134427 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9" (OuterVolumeSpecName: "kube-api-access-vrrq9") pod "7e70813e-32c6-4649-9ae4-5291ceed814e" (UID: "7e70813e-32c6-4649-9ae4-5291ceed814e"). InnerVolumeSpecName "kube-api-access-vrrq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.230053 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrrq9\" (UniqueName: \"kubernetes.io/projected/7e70813e-32c6-4649-9ae4-5291ceed814e-kube-api-access-vrrq9\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.318009 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7e70813e-32c6-4649-9ae4-5291ceed814e" (UID: "7e70813e-32c6-4649-9ae4-5291ceed814e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.331376 4724 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e70813e-32c6-4649-9ae4-5291ceed814e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:07 crc kubenswrapper[4724]: I0223 19:02:07.948315 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78zrq/must-gather-t654n" Feb 23 19:02:08 crc kubenswrapper[4724]: I0223 19:02:08.961025 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" path="/var/lib/kubelet/pods/7e70813e-32c6-4649-9ae4-5291ceed814e/volumes" Feb 23 19:02:11 crc kubenswrapper[4724]: I0223 19:02:11.951597 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:02:11 crc kubenswrapper[4724]: E0223 19:02:11.952192 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:02:25 crc kubenswrapper[4724]: I0223 19:02:25.951913 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:02:25 crc kubenswrapper[4724]: E0223 19:02:25.952730 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:02:38 crc kubenswrapper[4724]: I0223 19:02:38.950889 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:02:38 crc kubenswrapper[4724]: E0223 19:02:38.951661 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.853550 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:02:45 crc kubenswrapper[4724]: E0223 19:02:45.858765 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="registry-server" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.858899 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="registry-server" Feb 23 19:02:45 crc kubenswrapper[4724]: E0223 19:02:45.859001 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="copy" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.859059 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="copy" Feb 23 19:02:45 crc kubenswrapper[4724]: E0223 19:02:45.859151 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="extract-content" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.859230 
4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="extract-content" Feb 23 19:02:45 crc kubenswrapper[4724]: E0223 19:02:45.859342 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="gather" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.859501 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="gather" Feb 23 19:02:45 crc kubenswrapper[4724]: E0223 19:02:45.859585 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="extract-utilities" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.859650 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="extract-utilities" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.860572 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="gather" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.860609 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e110221b-522e-48a5-89ea-16b295239083" containerName="registry-server" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.860640 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e70813e-32c6-4649-9ae4-5291ceed814e" containerName="copy" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.863572 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:45 crc kubenswrapper[4724]: I0223 19:02:45.868821 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.008812 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wxk\" (UniqueName: \"kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.008879 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.010023 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.114122 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wxk\" (UniqueName: \"kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: 
I0223 19:02:46.114178 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.114830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.115102 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.115429 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.134717 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wxk\" (UniqueName: \"kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk\") pod \"redhat-marketplace-prdj8\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.194884 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:46 crc kubenswrapper[4724]: I0223 19:02:46.678893 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:02:47 crc kubenswrapper[4724]: I0223 19:02:47.290900 4724 generic.go:334] "Generic (PLEG): container finished" podID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerID="48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b" exitCode=0 Feb 23 19:02:47 crc kubenswrapper[4724]: I0223 19:02:47.290982 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerDied","Data":"48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b"} Feb 23 19:02:47 crc kubenswrapper[4724]: I0223 19:02:47.291205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerStarted","Data":"43841586b019f5f9bc13bb2ece209d8650bef6b14e027ffdce3439b79c4d4707"} Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.216067 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.218349 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.241273 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.265408 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.265755 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.266160 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rv5v\" (UniqueName: \"kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.302415 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerStarted","Data":"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d"} Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.368520 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rv5v\" (UniqueName: \"kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.368704 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.368740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.369164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.369237 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.389257 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rv5v\" (UniqueName: \"kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v\") pod \"certified-operators-cmd7l\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:48 crc kubenswrapper[4724]: I0223 19:02:48.541303 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:49 crc kubenswrapper[4724]: I0223 19:02:49.036057 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:02:49 crc kubenswrapper[4724]: W0223 19:02:49.043588 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d8478a_b389_45e5_9de3_baafd7976396.slice/crio-c6a08e6240e04d1cab8b3197faa970f6201a2b9cb2e9bc51e4f13069c97c5b40 WatchSource:0}: Error finding container c6a08e6240e04d1cab8b3197faa970f6201a2b9cb2e9bc51e4f13069c97c5b40: Status 404 returned error can't find the container with id c6a08e6240e04d1cab8b3197faa970f6201a2b9cb2e9bc51e4f13069c97c5b40 Feb 23 19:02:49 crc kubenswrapper[4724]: I0223 19:02:49.319382 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerStarted","Data":"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce"} Feb 23 19:02:49 crc kubenswrapper[4724]: I0223 19:02:49.319446 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerStarted","Data":"c6a08e6240e04d1cab8b3197faa970f6201a2b9cb2e9bc51e4f13069c97c5b40"} Feb 23 19:02:49 crc kubenswrapper[4724]: I0223 19:02:49.325094 4724 generic.go:334] "Generic (PLEG): container finished" podID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerID="18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d" exitCode=0 Feb 23 19:02:49 crc kubenswrapper[4724]: I0223 19:02:49.325139 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerDied","Data":"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d"} Feb 23 19:02:50 crc kubenswrapper[4724]: I0223 19:02:50.337604 4724 generic.go:334] "Generic (PLEG): container finished" podID="72d8478a-b389-45e5-9de3-baafd7976396" containerID="8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce" exitCode=0 Feb 23 19:02:50 crc kubenswrapper[4724]: I0223 19:02:50.337652 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerDied","Data":"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce"} Feb 23 19:02:50 crc kubenswrapper[4724]: I0223 19:02:50.342787 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" 
event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerStarted","Data":"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2"} Feb 23 19:02:50 crc kubenswrapper[4724]: I0223 19:02:50.387036 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-prdj8" podStartSLOduration=2.944416725 podStartE2EDuration="5.387017325s" podCreationTimestamp="2026-02-23 19:02:45 +0000 UTC" firstStartedPulling="2026-02-23 19:02:47.292652932 +0000 UTC m=+5523.108852532" lastFinishedPulling="2026-02-23 19:02:49.735253532 +0000 UTC m=+5525.551453132" observedRunningTime="2026-02-23 19:02:50.381351934 +0000 UTC m=+5526.197551534" watchObservedRunningTime="2026-02-23 19:02:50.387017325 +0000 UTC m=+5526.203216925" Feb 23 19:02:51 crc kubenswrapper[4724]: I0223 19:02:51.353909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerStarted","Data":"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a"} Feb 23 19:02:52 crc kubenswrapper[4724]: I0223 19:02:52.231505 4724 scope.go:117] "RemoveContainer" containerID="6f1c1d463f48ba4aaa2d48a4c9ac140d46992a31fde19caab9e3fac866280192" Feb 23 19:02:52 crc kubenswrapper[4724]: I0223 19:02:52.346807 4724 scope.go:117] "RemoveContainer" containerID="785952424dc738efaafd536fb00f8a3a2054c56ee243ff603778dd75b29d68ad" Feb 23 19:02:52 crc kubenswrapper[4724]: I0223 19:02:52.371709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78zrq_must-gather-t654n_7e70813e-32c6-4649-9ae4-5291ceed814e/copy/0.log" Feb 23 19:02:52 crc kubenswrapper[4724]: I0223 19:02:52.378193 4724 scope.go:117] "RemoveContainer" containerID="84ee3e3f59657325bcd0ef11a242405818912936714c67b72a09d175c28ef5c2" Feb 23 19:02:53 crc kubenswrapper[4724]: I0223 19:02:53.383933 4724 generic.go:334] "Generic (PLEG): container finished" podID="72d8478a-b389-45e5-9de3-baafd7976396" containerID="fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a" exitCode=0 Feb 23 19:02:53 crc kubenswrapper[4724]: I0223 19:02:53.384012 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerDied","Data":"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a"} Feb 23 19:02:53 crc kubenswrapper[4724]: I0223 19:02:53.952596 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:02:53 crc kubenswrapper[4724]: E0223 19:02:53.953427 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:02:54 crc kubenswrapper[4724]: I0223 19:02:54.399068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerStarted","Data":"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d"} Feb 23 19:02:54 crc kubenswrapper[4724]: I0223 19:02:54.433312 4724 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmd7l" podStartSLOduration=2.994783556 podStartE2EDuration="6.433281535s" podCreationTimestamp="2026-02-23 19:02:48 +0000 UTC" firstStartedPulling="2026-02-23 19:02:50.340677824 +0000 UTC m=+5526.156877424" lastFinishedPulling="2026-02-23 19:02:53.779175803 +0000 UTC m=+5529.595375403" observedRunningTime="2026-02-23 19:02:54.420234881 +0000 UTC m=+5530.236434491" watchObservedRunningTime="2026-02-23 19:02:54.433281535 +0000 UTC m=+5530.249481175" Feb 23 19:02:56 crc kubenswrapper[4724]: I0223 19:02:56.195487 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:56 crc kubenswrapper[4724]: I0223 19:02:56.195692 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:56 crc kubenswrapper[4724]: I0223 19:02:56.271590 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:56 crc kubenswrapper[4724]: I0223 19:02:56.487838 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:02:57 crc kubenswrapper[4724]: I0223 19:02:57.809996 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:02:58 crc kubenswrapper[4724]: I0223 19:02:58.542259 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:58 crc kubenswrapper[4724]: I0223 19:02:58.542655 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:58 crc kubenswrapper[4724]: I0223 19:02:58.612146 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:02:59 crc kubenswrapper[4724]: I0223 19:02:59.463226 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-prdj8" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="registry-server" containerID="cri-o://f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2" gracePeriod=2 Feb 23 19:02:59 crc kubenswrapper[4724]: I0223 19:02:59.558696 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.041227 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.151299 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content\") pod \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.151441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities\") pod \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.151491 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5wxk\" (UniqueName: \"kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk\") pod \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\" (UID: \"6c74dae2-6c14-4830-96e4-6af8d2ad583d\") " Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.154014 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities" (OuterVolumeSpecName: "utilities") pod "6c74dae2-6c14-4830-96e4-6af8d2ad583d" (UID: "6c74dae2-6c14-4830-96e4-6af8d2ad583d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.160574 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk" (OuterVolumeSpecName: "kube-api-access-m5wxk") pod "6c74dae2-6c14-4830-96e4-6af8d2ad583d" (UID: "6c74dae2-6c14-4830-96e4-6af8d2ad583d"). InnerVolumeSpecName "kube-api-access-m5wxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.182199 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c74dae2-6c14-4830-96e4-6af8d2ad583d" (UID: "6c74dae2-6c14-4830-96e4-6af8d2ad583d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.253705 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.253995 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c74dae2-6c14-4830-96e4-6af8d2ad583d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.254024 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5wxk\" (UniqueName: \"kubernetes.io/projected/6c74dae2-6c14-4830-96e4-6af8d2ad583d-kube-api-access-m5wxk\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.404985 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.474693 4724 generic.go:334] "Generic (PLEG): container finished" podID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerID="f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2" exitCode=0 Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.474738 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prdj8" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.474786 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerDied","Data":"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2"} Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.474859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prdj8" event={"ID":"6c74dae2-6c14-4830-96e4-6af8d2ad583d","Type":"ContainerDied","Data":"43841586b019f5f9bc13bb2ece209d8650bef6b14e027ffdce3439b79c4d4707"} Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.474886 4724 scope.go:117] "RemoveContainer" containerID="f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.502109 4724 scope.go:117] "RemoveContainer" containerID="18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.533693 4724 scope.go:117] "RemoveContainer" containerID="48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.557972 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.566084 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-prdj8"] Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.575602 4724 scope.go:117] "RemoveContainer" containerID="f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2" Feb 23 19:03:00 crc kubenswrapper[4724]: E0223 19:03:00.576198 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2\": container with ID starting with f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2 not 
found: ID does not exist" containerID="f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.576334 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2"} err="failed to get container status \"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2\": rpc error: code = NotFound desc = could not find container \"f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2\": container with ID starting with f70af62ed6358ff3bff616aec6441f0b6025a483cfafa3812c87adeb6e70cbb2 not found: ID does not exist" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.576452 4724 scope.go:117] "RemoveContainer" containerID="18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d" Feb 23 19:03:00 crc kubenswrapper[4724]: E0223 19:03:00.577034 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d\": container with ID starting with 18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d not found: ID does not exist" containerID="18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.577101 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d"} err="failed to get container status \"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d\": rpc error: code = NotFound desc = could not find container \"18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d\": container with ID starting with 18e0d14207898329c3f9a6ad64097c33a7fc3e2893703b3996bb8112cdcf6d7d not found: ID does not exist" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.577145 4724 scope.go:117] "RemoveContainer" containerID="48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b" Feb 23 19:03:00 crc kubenswrapper[4724]: E0223 19:03:00.577490 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b\": container with ID starting with 48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b not found: ID does not exist" containerID="48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.577599 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b"} err="failed to get container status \"48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b\": rpc error: code = NotFound desc = could not find container \"48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b\": container with ID starting with 48180a84fc16e3815ee29e0ab88c594f69ecb2dc29c0fabcd9bc52084f2b102b not found: ID does not exist" Feb 23 19:03:00 crc kubenswrapper[4724]: I0223 19:03:00.965717 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" path="/var/lib/kubelet/pods/6c74dae2-6c14-4830-96e4-6af8d2ad583d/volumes" Feb 23 19:03:01 crc kubenswrapper[4724]: I0223 19:03:01.487105 4724 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-cmd7l" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="registry-server" containerID="cri-o://f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d" gracePeriod=2 Feb 23 19:03:01 crc kubenswrapper[4724]: E0223 19:03:01.659377 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d8478a_b389_45e5_9de3_baafd7976396.slice/crio-f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d8478a_b389_45e5_9de3_baafd7976396.slice/crio-conmon-f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d.scope\": RecentStats: unable to find data in memory cache]" Feb 23 19:03:01 crc kubenswrapper[4724]: I0223 19:03:01.995355 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.109227 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rv5v\" (UniqueName: \"kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v\") pod \"72d8478a-b389-45e5-9de3-baafd7976396\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.109549 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities\") pod \"72d8478a-b389-45e5-9de3-baafd7976396\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.109622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content\") pod \"72d8478a-b389-45e5-9de3-baafd7976396\" (UID: \"72d8478a-b389-45e5-9de3-baafd7976396\") " Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.110973 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities" (OuterVolumeSpecName: "utilities") pod "72d8478a-b389-45e5-9de3-baafd7976396" (UID: "72d8478a-b389-45e5-9de3-baafd7976396"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.114983 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v" (OuterVolumeSpecName: "kube-api-access-8rv5v") pod "72d8478a-b389-45e5-9de3-baafd7976396" (UID: "72d8478a-b389-45e5-9de3-baafd7976396"). InnerVolumeSpecName "kube-api-access-8rv5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.212774 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.212809 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rv5v\" (UniqueName: \"kubernetes.io/projected/72d8478a-b389-45e5-9de3-baafd7976396-kube-api-access-8rv5v\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.277697 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72d8478a-b389-45e5-9de3-baafd7976396" (UID: "72d8478a-b389-45e5-9de3-baafd7976396"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.314307 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d8478a-b389-45e5-9de3-baafd7976396-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.498292 4724 generic.go:334] "Generic (PLEG): container finished" podID="72d8478a-b389-45e5-9de3-baafd7976396" containerID="f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d" exitCode=0 Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.498371 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerDied","Data":"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d"} Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.498425 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmd7l" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.498443 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmd7l" event={"ID":"72d8478a-b389-45e5-9de3-baafd7976396","Type":"ContainerDied","Data":"c6a08e6240e04d1cab8b3197faa970f6201a2b9cb2e9bc51e4f13069c97c5b40"} Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.498462 4724 scope.go:117] "RemoveContainer" containerID="f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.522620 4724 scope.go:117] "RemoveContainer" containerID="fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.535691 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.543351 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmd7l"] Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.561113 4724 scope.go:117] "RemoveContainer" containerID="8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.603185 4724 scope.go:117] "RemoveContainer" containerID="f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d" Feb 23 19:03:02 crc kubenswrapper[4724]: E0223 19:03:02.603854 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d\": container with ID starting with f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d not found: ID does not exist" containerID="f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.603910 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d"} err="failed to get container status \"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d\": rpc error: code = NotFound desc = could not find container \"f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d\": container with ID starting with f54c740878ccd55ffddd1982b0bb23853dbd32b15030bf52f71d5148cde8c94d not found: ID does not exist" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.603937 4724 scope.go:117] "RemoveContainer" containerID="fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a" Feb 23 19:03:02 crc kubenswrapper[4724]: E0223 19:03:02.604378 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a\": container with ID starting with fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a not found: ID does not exist" containerID="fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.604453 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a"} err="failed to get container status \"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a\": rpc error: code = NotFound desc = could not find 
container \"fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a\": container with ID starting with fb793dc88220d1959cc8c7037058946b6f886b8b602affed483eac44c5a2572a not found: ID does not exist" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.604491 4724 scope.go:117] "RemoveContainer" containerID="8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce" Feb 23 19:03:02 crc kubenswrapper[4724]: E0223 19:03:02.605030 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce\": container with ID starting with 8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce not found: ID does not exist" containerID="8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.605056 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce"} err="failed to get container status \"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce\": rpc error: code = NotFound desc = could not find container \"8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce\": container with ID starting with 8d0f32938593f59b5405c52b39a2df631ccd55782bc30d89039f1b0d236322ce not found: ID does not exist" Feb 23 19:03:02 crc kubenswrapper[4724]: I0223 19:03:02.969946 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d8478a-b389-45e5-9de3-baafd7976396" path="/var/lib/kubelet/pods/72d8478a-b389-45e5-9de3-baafd7976396/volumes" Feb 23 19:03:07 crc kubenswrapper[4724]: I0223 19:03:07.951464 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:03:07 crc kubenswrapper[4724]: E0223 19:03:07.952341 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:03:21 crc kubenswrapper[4724]: I0223 19:03:21.953152 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:03:21 crc kubenswrapper[4724]: E0223 19:03:21.954789 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:03:34 crc kubenswrapper[4724]: I0223 19:03:34.959875 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:03:34 crc kubenswrapper[4724]: E0223 19:03:34.960826 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:03:49 crc kubenswrapper[4724]: I0223 19:03:49.952112 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:03:49 crc kubenswrapper[4724]: E0223 19:03:49.953509 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:04:00 crc kubenswrapper[4724]: I0223 19:04:00.951511 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:04:00 crc kubenswrapper[4724]: E0223 19:04:00.953855 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:04:13 crc kubenswrapper[4724]: I0223 19:04:13.950962 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:04:13 crc kubenswrapper[4724]: E0223 19:04:13.951684 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:04:24 crc kubenswrapper[4724]: I0223 19:04:24.957512 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:04:24 crc kubenswrapper[4724]: E0223 19:04:24.958300 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:04:38 crc kubenswrapper[4724]: I0223 19:04:38.951647 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:04:38 crc kubenswrapper[4724]: E0223 19:04:38.952479 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:04:49 crc kubenswrapper[4724]: I0223 19:04:49.950756 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:04:49 crc kubenswrapper[4724]: E0223 19:04:49.952243 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:05:01 crc kubenswrapper[4724]: I0223 19:05:01.952010 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:05:02 crc kubenswrapper[4724]: I0223 19:05:02.880475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e"} Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.488152 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k7jcr/must-gather-4d6jx"] Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489216 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="extract-content" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489233 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="extract-content" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489245 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="extract-content" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489252 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="extract-content" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489268 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="extract-utilities" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489275 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="extract-utilities" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489304 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="extract-utilities" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489311 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="extract-utilities" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489327 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489336 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.489369 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489376 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489652 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c74dae2-6c14-4830-96e4-6af8d2ad583d" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.489680 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d8478a-b389-45e5-9de3-baafd7976396" containerName="registry-server" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.490967 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:23 crc kubenswrapper[4724]: W0223 19:05:23.495398 4724 reflector.go:561] object-"openshift-must-gather-k7jcr"/"default-dockercfg-49gnb": failed to list *v1.Secret: secrets "default-dockercfg-49gnb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-must-gather-k7jcr": no relationship found between node 'crc' and this object Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.495452 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-k7jcr\"/\"default-dockercfg-49gnb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-49gnb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-must-gather-k7jcr\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 19:05:23 crc kubenswrapper[4724]: W0223 19:05:23.495503 4724 reflector.go:561] object-"openshift-must-gather-k7jcr"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-must-gather-k7jcr": no relationship found between node 'crc' and this object Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.495861 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k7jcr"/"kube-root-ca.crt" Feb 23 19:05:23 crc kubenswrapper[4724]: E0223 19:05:23.495715 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-k7jcr\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-must-gather-k7jcr\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.502912 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k7jcr/must-gather-4d6jx"] Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.609711 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.609834 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2db4q\" (UniqueName: \"kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.712330 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2db4q\" (UniqueName: \"kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.712599 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:23 crc kubenswrapper[4724]: I0223 19:05:23.713115 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:24 crc kubenswrapper[4724]: I0223 19:05:24.535136 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-k7jcr"/"default-dockercfg-49gnb" Feb 23 19:05:24 crc kubenswrapper[4724]: I0223 19:05:24.681945 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k7jcr"/"openshift-service-ca.crt" Feb 23 19:05:24 crc kubenswrapper[4724]: I0223 19:05:24.700471 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2db4q\" (UniqueName: \"kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q\") pod \"must-gather-4d6jx\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:24 crc kubenswrapper[4724]: I0223 19:05:24.712527 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:05:25 crc kubenswrapper[4724]: W0223 19:05:25.144147 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd5db2f_5d5d_4e03_9ea5_52205ebbc403.slice/crio-2384d3462041c0a1eb3a356caee87dbefa83937746c20cb14fb0e771954b4ec3 WatchSource:0}: Error finding container 2384d3462041c0a1eb3a356caee87dbefa83937746c20cb14fb0e771954b4ec3: Status 404 returned error can't find the container with id 2384d3462041c0a1eb3a356caee87dbefa83937746c20cb14fb0e771954b4ec3 Feb 23 19:05:25 crc kubenswrapper[4724]: I0223 19:05:25.144222 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k7jcr/must-gather-4d6jx"] Feb 23 19:05:26 crc kubenswrapper[4724]: I0223 19:05:26.086290 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" event={"ID":"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403","Type":"ContainerStarted","Data":"96f5a033736943c7513f9a8f34a49880113a195bc8587dc4ea5bc800d71b98cf"} Feb 23 19:05:26 crc kubenswrapper[4724]: I0223 19:05:26.086965 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" event={"ID":"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403","Type":"ContainerStarted","Data":"e5813259c09f2c8a7f3e0ed93cb803f891fec90e0ac764c6318933dc581dc811"} Feb 23 19:05:26 crc kubenswrapper[4724]: I0223 19:05:26.086984 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" event={"ID":"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403","Type":"ContainerStarted","Data":"2384d3462041c0a1eb3a356caee87dbefa83937746c20cb14fb0e771954b4ec3"} Feb 23 19:05:26 crc kubenswrapper[4724]: I0223 19:05:26.117926 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" podStartSLOduration=3.117902899 podStartE2EDuration="3.117902899s" podCreationTimestamp="2026-02-23 19:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:05:26.11313096 +0000 UTC m=+5681.929330560" watchObservedRunningTime="2026-02-23 19:05:26.117902899 +0000 UTC m=+5681.934102499" Feb 23 19:05:28 crc kubenswrapper[4724]: I0223 19:05:28.987058 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-5gd9f"] Feb 23 19:05:28 crc kubenswrapper[4724]: I0223 19:05:28.989848 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.025687 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkh7m\" (UniqueName: \"kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.025792 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.128029 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.128215 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.128223 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkh7m\" (UniqueName: \"kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.154755 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkh7m\" (UniqueName: \"kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m\") pod \"crc-debug-5gd9f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: I0223 19:05:29.320685 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:05:29 crc kubenswrapper[4724]: W0223 19:05:29.363859 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a150892_72b7_4acc_8fc5_16e0061c8f4f.slice/crio-c92d1c7958d68f611208c143417319d607479f6207b32e39dd944ab5248f4ce9 WatchSource:0}: Error finding container c92d1c7958d68f611208c143417319d607479f6207b32e39dd944ab5248f4ce9: Status 404 returned error can't find the container with id c92d1c7958d68f611208c143417319d607479f6207b32e39dd944ab5248f4ce9 Feb 23 19:05:30 crc kubenswrapper[4724]: I0223 19:05:30.121799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" event={"ID":"0a150892-72b7-4acc-8fc5-16e0061c8f4f","Type":"ContainerStarted","Data":"f8c06962f9fadd11d7a472af6559568c1e54b710e57181675eeba78130c08b03"} Feb 23 19:05:30 crc kubenswrapper[4724]: I0223 19:05:30.123215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" event={"ID":"0a150892-72b7-4acc-8fc5-16e0061c8f4f","Type":"ContainerStarted","Data":"c92d1c7958d68f611208c143417319d607479f6207b32e39dd944ab5248f4ce9"} Feb 23 19:05:30 crc kubenswrapper[4724]: I0223 19:05:30.144313 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" podStartSLOduration=2.144295065 podStartE2EDuration="2.144295065s" podCreationTimestamp="2026-02-23 19:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:05:30.136495391 +0000 UTC m=+5685.952694991" watchObservedRunningTime="2026-02-23 19:05:30.144295065 +0000 UTC m=+5685.960494665" Feb 23 19:06:09 crc kubenswrapper[4724]: I0223 19:06:09.486169 4724 generic.go:334] "Generic (PLEG): container finished" podID="0a150892-72b7-4acc-8fc5-16e0061c8f4f" containerID="f8c06962f9fadd11d7a472af6559568c1e54b710e57181675eeba78130c08b03" exitCode=0 Feb 23 19:06:09 crc kubenswrapper[4724]: I0223 19:06:09.486278 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" event={"ID":"0a150892-72b7-4acc-8fc5-16e0061c8f4f","Type":"ContainerDied","Data":"f8c06962f9fadd11d7a472af6559568c1e54b710e57181675eeba78130c08b03"} Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.626882 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.664786 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-5gd9f"] Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.678805 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-5gd9f"] Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.705992 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host\") pod \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.706043 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkh7m\" (UniqueName: \"kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m\") pod \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\" (UID: \"0a150892-72b7-4acc-8fc5-16e0061c8f4f\") " Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.706104 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host" (OuterVolumeSpecName: "host") pod "0a150892-72b7-4acc-8fc5-16e0061c8f4f" (UID: "0a150892-72b7-4acc-8fc5-16e0061c8f4f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.706974 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a150892-72b7-4acc-8fc5-16e0061c8f4f-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.712599 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m" (OuterVolumeSpecName: "kube-api-access-xkh7m") pod "0a150892-72b7-4acc-8fc5-16e0061c8f4f" (UID: "0a150892-72b7-4acc-8fc5-16e0061c8f4f"). InnerVolumeSpecName "kube-api-access-xkh7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.808840 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkh7m\" (UniqueName: \"kubernetes.io/projected/0a150892-72b7-4acc-8fc5-16e0061c8f4f-kube-api-access-xkh7m\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:10 crc kubenswrapper[4724]: I0223 19:06:10.962285 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a150892-72b7-4acc-8fc5-16e0061c8f4f" path="/var/lib/kubelet/pods/0a150892-72b7-4acc-8fc5-16e0061c8f4f/volumes" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.506617 4724 scope.go:117] "RemoveContainer" containerID="f8c06962f9fadd11d7a472af6559568c1e54b710e57181675eeba78130c08b03" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.506687 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-5gd9f" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.886542 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-lsvvm"] Feb 23 19:06:11 crc kubenswrapper[4724]: E0223 19:06:11.886958 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a150892-72b7-4acc-8fc5-16e0061c8f4f" containerName="container-00" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.886971 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a150892-72b7-4acc-8fc5-16e0061c8f4f" containerName="container-00" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.887172 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a150892-72b7-4acc-8fc5-16e0061c8f4f" containerName="container-00" Feb 23 19:06:11 crc kubenswrapper[4724]: I0223 19:06:11.887861 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.029003 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jbj\" (UniqueName: \"kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.029097 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.130780 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jbj\" (UniqueName: \"kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.131059 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.131271 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.154924 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jbj\" (UniqueName: \"kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj\") pod \"crc-debug-lsvvm\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.203647 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.517932 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" event={"ID":"23475525-4e87-4364-b3f8-c559526d0f77","Type":"ContainerStarted","Data":"cb7daa79e2982ca4a77a4e14183b50e85901cf9bdeccc22aad6a8e817d59c449"} Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.517989 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" event={"ID":"23475525-4e87-4364-b3f8-c559526d0f77","Type":"ContainerStarted","Data":"782e7e92145df5b5c7045d4ca40baafb056d632214db2990e815dd0c47620a2d"} Feb 23 19:06:12 crc kubenswrapper[4724]: I0223 19:06:12.541537 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" podStartSLOduration=1.541518234 podStartE2EDuration="1.541518234s" podCreationTimestamp="2026-02-23 19:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:06:12.532293055 +0000 UTC m=+5728.348492665" watchObservedRunningTime="2026-02-23 19:06:12.541518234 +0000 UTC m=+5728.357717834" Feb 23 19:06:13 crc kubenswrapper[4724]: I0223 19:06:13.530686 4724 generic.go:334] "Generic (PLEG): container finished" podID="23475525-4e87-4364-b3f8-c559526d0f77" containerID="cb7daa79e2982ca4a77a4e14183b50e85901cf9bdeccc22aad6a8e817d59c449" exitCode=0 Feb 23 19:06:13 crc kubenswrapper[4724]: I0223 19:06:13.530729 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" event={"ID":"23475525-4e87-4364-b3f8-c559526d0f77","Type":"ContainerDied","Data":"cb7daa79e2982ca4a77a4e14183b50e85901cf9bdeccc22aad6a8e817d59c449"} Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.647935 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.792238 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jbj\" (UniqueName: \"kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj\") pod \"23475525-4e87-4364-b3f8-c559526d0f77\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.792314 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host\") pod \"23475525-4e87-4364-b3f8-c559526d0f77\" (UID: \"23475525-4e87-4364-b3f8-c559526d0f77\") " Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.792827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host" (OuterVolumeSpecName: "host") pod "23475525-4e87-4364-b3f8-c559526d0f77" (UID: "23475525-4e87-4364-b3f8-c559526d0f77"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.802709 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj" (OuterVolumeSpecName: "kube-api-access-m4jbj") pod "23475525-4e87-4364-b3f8-c559526d0f77" (UID: "23475525-4e87-4364-b3f8-c559526d0f77"). 
InnerVolumeSpecName "kube-api-access-m4jbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.841928 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-lsvvm"] Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.854499 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-lsvvm"] Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.894646 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jbj\" (UniqueName: \"kubernetes.io/projected/23475525-4e87-4364-b3f8-c559526d0f77-kube-api-access-m4jbj\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.894677 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23475525-4e87-4364-b3f8-c559526d0f77-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:14 crc kubenswrapper[4724]: I0223 19:06:14.961995 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23475525-4e87-4364-b3f8-c559526d0f77" path="/var/lib/kubelet/pods/23475525-4e87-4364-b3f8-c559526d0f77/volumes" Feb 23 19:06:15 crc kubenswrapper[4724]: I0223 19:06:15.551384 4724 scope.go:117] "RemoveContainer" containerID="cb7daa79e2982ca4a77a4e14183b50e85901cf9bdeccc22aad6a8e817d59c449" Feb 23 19:06:15 crc kubenswrapper[4724]: I0223 19:06:15.551453 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-lsvvm" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.052971 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-kscdv"] Feb 23 19:06:16 crc kubenswrapper[4724]: E0223 19:06:16.053446 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23475525-4e87-4364-b3f8-c559526d0f77" containerName="container-00" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.053459 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="23475525-4e87-4364-b3f8-c559526d0f77" containerName="container-00" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.053658 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="23475525-4e87-4364-b3f8-c559526d0f77" containerName="container-00" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.054334 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.222575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4w4f\" (UniqueName: \"kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.222685 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.324444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.324598 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.324628 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4w4f\" (UniqueName: \"kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.346885 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4w4f\" (UniqueName: \"kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f\") pod \"crc-debug-kscdv\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.371227 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:16 crc kubenswrapper[4724]: W0223 19:06:16.400669 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56a91896_a5d5_4879_bd9f_0dd8791887d4.slice/crio-8495b2b89ca6d211a939bf7143095545b5c0290c29fbc5bb6df9d482d2e427e3 WatchSource:0}: Error finding container 8495b2b89ca6d211a939bf7143095545b5c0290c29fbc5bb6df9d482d2e427e3: Status 404 returned error can't find the container with id 8495b2b89ca6d211a939bf7143095545b5c0290c29fbc5bb6df9d482d2e427e3 Feb 23 19:06:16 crc kubenswrapper[4724]: I0223 19:06:16.566882 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" event={"ID":"56a91896-a5d5-4879-bd9f-0dd8791887d4","Type":"ContainerStarted","Data":"8495b2b89ca6d211a939bf7143095545b5c0290c29fbc5bb6df9d482d2e427e3"} Feb 23 19:06:17 crc kubenswrapper[4724]: I0223 19:06:17.578094 4724 generic.go:334] "Generic (PLEG): container finished" podID="56a91896-a5d5-4879-bd9f-0dd8791887d4" containerID="85e64266d67c1c113ab2a826775f37e102607d9f486e09430c02d465a19843ec" exitCode=0 Feb 23 19:06:17 crc kubenswrapper[4724]: I0223 19:06:17.578133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" event={"ID":"56a91896-a5d5-4879-bd9f-0dd8791887d4","Type":"ContainerDied","Data":"85e64266d67c1c113ab2a826775f37e102607d9f486e09430c02d465a19843ec"} Feb 23 19:06:17 crc kubenswrapper[4724]: I0223 19:06:17.617790 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-kscdv"] Feb 23 19:06:17 crc kubenswrapper[4724]: I0223 19:06:17.639242 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k7jcr/crc-debug-kscdv"] Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.735033 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.874718 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4w4f\" (UniqueName: \"kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f\") pod \"56a91896-a5d5-4879-bd9f-0dd8791887d4\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.874828 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host\") pod \"56a91896-a5d5-4879-bd9f-0dd8791887d4\" (UID: \"56a91896-a5d5-4879-bd9f-0dd8791887d4\") " Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.875513 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host" (OuterVolumeSpecName: "host") pod "56a91896-a5d5-4879-bd9f-0dd8791887d4" (UID: "56a91896-a5d5-4879-bd9f-0dd8791887d4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.891628 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f" (OuterVolumeSpecName: "kube-api-access-h4w4f") pod "56a91896-a5d5-4879-bd9f-0dd8791887d4" (UID: "56a91896-a5d5-4879-bd9f-0dd8791887d4"). 
InnerVolumeSpecName "kube-api-access-h4w4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.962988 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a91896-a5d5-4879-bd9f-0dd8791887d4" path="/var/lib/kubelet/pods/56a91896-a5d5-4879-bd9f-0dd8791887d4/volumes" Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.977978 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4w4f\" (UniqueName: \"kubernetes.io/projected/56a91896-a5d5-4879-bd9f-0dd8791887d4-kube-api-access-h4w4f\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:18 crc kubenswrapper[4724]: I0223 19:06:18.978012 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56a91896-a5d5-4879-bd9f-0dd8791887d4-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:19 crc kubenswrapper[4724]: I0223 19:06:19.601257 4724 scope.go:117] "RemoveContainer" containerID="85e64266d67c1c113ab2a826775f37e102607d9f486e09430c02d465a19843ec" Feb 23 19:06:19 crc kubenswrapper[4724]: I0223 19:06:19.601319 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/crc-debug-kscdv" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.280305 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f4c5b5ccd-7xcmx_e93c91f5-d9d7-4322-97c0-8d2b9ab82714/barbican-api/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.497651 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f4c5b5ccd-7xcmx_e93c91f5-d9d7-4322-97c0-8d2b9ab82714/barbican-api-log/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.624342 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cbfcdd8bd-6sfgm_c06fc526-bdf8-419c-8261-29fca2da229c/barbican-keystone-listener/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.658369 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cbfcdd8bd-6sfgm_c06fc526-bdf8-419c-8261-29fca2da229c/barbican-keystone-listener-log/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.751921 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68f84cbc4f-9ns6x_d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1/barbican-worker/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.917656 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68f84cbc4f-9ns6x_d0195d90-e7f7-4cba-b83a-75b5e0a1bcd1/barbican-worker-log/0.log" Feb 23 19:07:04 crc kubenswrapper[4724]: I0223 19:07:04.941421 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4wbqf_456d50d3-b5f9-4dd4-9eec-c15f21b183e7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.241051 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/ceilometer-notification-agent/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.263941 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/ceilometer-central-agent/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.267405 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/proxy-httpd/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.294675 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ed30198-318f-476e-83b7-e93ab4c5625d/sg-core/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.564497 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c03aee9-806f-4319-a3b8-b3226a740f4b/cinder-api-log/0.log" Feb 23 19:07:05 crc kubenswrapper[4724]: I0223 19:07:05.858987 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_55cae485-5e0f-4fb8-a19a-21f84b246733/probe/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.040998 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_55cae485-5e0f-4fb8-a19a-21f84b246733/cinder-backup/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.044381 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6c03aee9-806f-4319-a3b8-b3226a740f4b/cinder-api/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.109123 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2/cinder-scheduler/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.170973 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34e8e653-ac7e-4bca-9ce1-e5f9f4b5b2f2/probe/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.373950 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fee47e38-5239-488d-a11c-53342802f8b1/probe/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.432452 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fee47e38-5239-488d-a11c-53342802f8b1/cinder-volume/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.653199 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_34ef4ee9-8229-4235-bb3c-f5138b1f8d4f/probe/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.726751 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_34ef4ee9-8229-4235-bb3c-f5138b1f8d4f/cinder-volume/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.729564 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5s9b9_a5ffe362-1a42-40ec-8cbf-ce9b83db854d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:06 crc kubenswrapper[4724]: I0223 19:07:06.914879 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-g4758_78a23e2d-61b1-4393-95b0-e4872270628a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.038867 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/init/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.321545 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/init/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.456889 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-255hh_15bf49cb-7015-49e6-9710-4f701dc9d6f7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.500320 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69644d8897-p4mmz_f47e5d73-be56-42e3-b23e-1710cfab9733/dnsmasq-dns/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.634749 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8883a549-3562-42b7-86d4-934c3076f934/glance-httpd/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.676875 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8883a549-3562-42b7-86d4-934c3076f934/glance-log/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.804739 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_260cff26-a398-4898-9708-61ef33a6aa00/glance-httpd/0.log" Feb 23 19:07:07 crc kubenswrapper[4724]: I0223 19:07:07.840287 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_260cff26-a398-4898-9708-61ef33a6aa00/glance-log/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.070992 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b4b6c94fb-ttctl_07785399-35e6-432b-8835-4412fa3ff02b/horizon/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.152986 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fsscq_0e96ae5a-4689-4373-bfad-06a0f99345d2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.404035 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ktllk_ccfb9295-92e0-4f3d-a25c-a3a7f433126e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.705261 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531161-mxchr_3b373b9a-1005-41fb-92c8-22d259d8f036/keystone-cron/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.761738 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b4b6c94fb-ttctl_07785399-35e6-432b-8835-4412fa3ff02b/horizon-log/0.log" Feb 23 19:07:08 crc kubenswrapper[4724]: I0223 19:07:08.983237 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_85b4f79b-e696-483e-8ee7-8653f8c07a40/kube-state-metrics/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.018959 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531221-dnhc9_822d2059-6be4-4c9f-8ca8-b38ebaf5ff01/keystone-cron/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.043637 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cb5799495-xxmx4_36583b8f-b74d-4f25-980e-030c8d3896c7/keystone-api/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.180326 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tp7qm_3f5fa243-d790-4006-9c4c-7a1bf93a56b4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.501525 4724 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fxkgw_ffe67500-5244-403d-8a50-59aa76582492/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.540143 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84d9ddfbc9-spsrv_59037714-7bc4-4c52-95d7-a791923f67fe/neutron-api/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.612173 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84d9ddfbc9-spsrv_59037714-7bc4-4c52-95d7-a791923f67fe/neutron-httpd/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.774857 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/setup-container/0.log" Feb 23 19:07:09 crc kubenswrapper[4724]: I0223 19:07:09.973537 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/setup-container/0.log" Feb 23 19:07:10 crc kubenswrapper[4724]: I0223 19:07:10.003378 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6e165de7-7e1a-47c3-84d2-9fc675a2224a/rabbitmq/0.log" Feb 23 19:07:10 crc kubenswrapper[4724]: I0223 19:07:10.610459 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_557c3e1b-ccc8-48d7-8a2c-78de846beac2/nova-cell0-conductor-conductor/0.log" Feb 23 19:07:10 crc kubenswrapper[4724]: I0223 19:07:10.872370 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e54fa012-7969-4917-888f-a2f822eb9449/nova-cell1-conductor-conductor/0.log" Feb 23 19:07:11 crc kubenswrapper[4724]: I0223 19:07:11.369780 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_31f78c36-4f54-425d-87a6-3b0c7093a06c/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 19:07:11 crc kubenswrapper[4724]: I0223 19:07:11.564544 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4bef9e90-cdd6-4eb6-8801-3f7b07bc9363/nova-api-log/0.log" Feb 23 19:07:11 crc kubenswrapper[4724]: I0223 19:07:11.584174 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t898c_28de6808-9434-463a-9b7f-cd4236c51c29/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:11 crc kubenswrapper[4724]: I0223 19:07:11.858190 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9365a64c-1314-4df5-b7b2-ed56c6d7a358/nova-metadata-log/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.083070 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4bef9e90-cdd6-4eb6-8801-3f7b07bc9363/nova-api-api/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.264347 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/mysql-bootstrap/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.477549 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_05bf40b5-2154-40d2-8714-7e7d24d42786/nova-scheduler-scheduler/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.493264 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/galera/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.495232 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c7ad5fb5-517e-4249-9da4-08d99599caf0/mysql-bootstrap/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.729008 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/mysql-bootstrap/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.906044 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/mysql-bootstrap/0.log" Feb 23 19:07:12 crc kubenswrapper[4724]: I0223 19:07:12.942509 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e48a20ad-1863-458a-ba27-6b24cee6df0c/galera/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.118460 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f5d061d8-a5d8-48fd-8f20-45eb9def3384/openstackclient/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.214842 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hh76w_8fd48d48-59c7-4470-9223-c3b3f786c8d9/ovn-controller/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.438482 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8b9ks_0371ce0f-1e0f-4b9f-a5aa-971ae7d19279/openstack-network-exporter/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.615005 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server-init/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.742733 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server-init/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.823539 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovsdb-server/0.log" Feb 23 19:07:13 crc kubenswrapper[4724]: I0223 19:07:13.968672 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9365a64c-1314-4df5-b7b2-ed56c6d7a358/nova-metadata-metadata/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.066821 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wn74p_5e7e7627-560c-4959-8d79-7999e31db5be/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.173253 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lzxrb_4f6f0027-7e55-407c-be1d-5dc5f57250a8/ovs-vswitchd/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.178759 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_46836cc7-f4d3-432c-aa3e-c448d50a212e/openstack-network-exporter/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.326563 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_46836cc7-f4d3-432c-aa3e-c448d50a212e/ovn-northd/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.394146 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_ba834afe-088c-4b0c-97f5-7986f8f9c988/ovsdbserver-nb/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.429123 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ba834afe-088c-4b0c-97f5-7986f8f9c988/openstack-network-exporter/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.643477 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02d0b5c7-a3f7-47d6-a52f-cff5a0946cea/ovsdbserver-sb/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.650703 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02d0b5c7-a3f7-47d6-a52f-cff5a0946cea/openstack-network-exporter/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.923026 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/init-config-reloader/0.log" Feb 23 19:07:14 crc kubenswrapper[4724]: I0223 19:07:14.987074 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-69f7cbf768-jd6kh_1b2a00ce-727b-4065-b3b4-99f43d28b54d/placement-api/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.049482 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-69f7cbf768-jd6kh_1b2a00ce-727b-4065-b3b4-99f43d28b54d/placement-log/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.133675 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/init-config-reloader/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.182556 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/config-reloader/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.188166 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/prometheus/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.271801 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a8cb62eb-328b-4857-92b7-2ec45d3b7714/thanos-sidecar/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.375838 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/setup-container/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.560725 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/setup-container/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.587511 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/setup-container/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.643082 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9723ff3a-6da5-46fd-be2a-89693223d4f0/rabbitmq/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.814646 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/setup-container/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.875494 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_1593736a-2034-4811-90f9-90645b954b2c/rabbitmq/0.log" Feb 23 19:07:15 crc kubenswrapper[4724]: I0223 19:07:15.885109 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-g74wb_bb78bbf2-4067-4e58-b506-5dc2249d2aff/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.101699 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ctwxj_190a2171-8cbd-4bb4-a22d-76d1cf634934/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.135476 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ftpsr_8780dd09-5b4b-40f6-81ee-d2163bd3f066/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.290751 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-v2zdl_1a8e063f-7461-4365-bb92-a08b5d5c5b1f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.363420 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xldw2_3067abd3-b2db-458d-a71c-9f569c2a6bdc/ssh-known-hosts-edpm-deployment/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.583630 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f447dffc7-s2mfq_46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b/proxy-server/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.711770 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f447dffc7-s2mfq_46ac9fa5-f3bd-48bf-b1e6-1b570b4bda5b/proxy-httpd/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.770205 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2vrd_bc3d191e-4725-42ef-90af-16b57d7bf649/swift-ring-rebalance/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.815927 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-auditor/0.log" Feb 23 19:07:16 crc kubenswrapper[4724]: I0223 19:07:16.934490 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-reaper/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.018108 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-auditor/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.037773 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-server/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.040329 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/account-replicator/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.203540 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-replicator/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.240957 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-server/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.272591 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-auditor/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.303990 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/container-updater/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.442148 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-expirer/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.481672 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-replicator/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.501035 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-updater/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.502257 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/object-server/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.618939 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/rsync/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.695474 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3946025b-c492-4f1b-a3c3-62d2fa658586/swift-recon-cron/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.760721 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mbgkn_3052df73-dea7-4da0-b0b1-f881cff2b747/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.935407 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0d826425-e3f8-42d4-823f-2f8db766ad9a/tempest-tests-tempest-tests-runner/0.log" Feb 23 19:07:17 crc kubenswrapper[4724]: I0223 19:07:17.972250 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_978d2d70-05f0-4404-8ace-2ba6f872d25a/test-operator-logs-container/0.log" Feb 23 19:07:18 crc kubenswrapper[4724]: I0223 19:07:18.211247 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x4j75_a46f5b1a-20be-4f6e-97fb-00662f817dc9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:07:18 crc kubenswrapper[4724]: I0223 19:07:18.964288 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_ad11f589-aa5d-493e-b431-25f6f7b0675b/watcher-applier/0.log" Feb 23 19:07:19 crc kubenswrapper[4724]: I0223 19:07:19.651886 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_dc365749-e4ec-46b3-9aa8-522dac685189/watcher-api-log/0.log" Feb 23 19:07:22 crc kubenswrapper[4724]: I0223 19:07:22.222783 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_935753ed-464b-4bac-af1f-e356a473c78f/watcher-decision-engine/0.log" Feb 23 19:07:23 crc 
kubenswrapper[4724]: I0223 19:07:23.248087 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_dc365749-e4ec-46b3-9aa8-522dac685189/watcher-api/0.log" Feb 23 19:07:25 crc kubenswrapper[4724]: I0223 19:07:25.813769 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eadce7d0-a9bc-4840-919b-a341aba11ca2/memcached/0.log" Feb 23 19:07:27 crc kubenswrapper[4724]: I0223 19:07:27.752177 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:07:27 crc kubenswrapper[4724]: I0223 19:07:27.752563 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:07:44 crc kubenswrapper[4724]: I0223 19:07:44.484857 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 19:07:44 crc kubenswrapper[4724]: I0223 19:07:44.609860 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 19:07:44 crc kubenswrapper[4724]: I0223 19:07:44.683957 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 19:07:44 crc kubenswrapper[4724]: I0223 19:07:44.792359 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 19:07:44 crc kubenswrapper[4724]: I0223 19:07:44.974989 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/util/0.log" Feb 23 19:07:45 crc kubenswrapper[4724]: I0223 19:07:45.026378 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/pull/0.log" Feb 23 19:07:45 crc kubenswrapper[4724]: I0223 19:07:45.170674 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2abe2e0813cd7e38464a3437d4c9e1acf5147797253340570e3ac2557qwvm8_da865614-a81a-4de0-b6e4-8be443632fa5/extract/0.log" Feb 23 19:07:45 crc kubenswrapper[4724]: I0223 19:07:45.372855 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-vqls9_967a6928-46e0-4a1e-90bd-cc9a204d9099/manager/0.log" Feb 23 19:07:45 crc kubenswrapper[4724]: I0223 19:07:45.720681 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-gmdl7_dedf8817-f3cf-4630-a825-71059f681d10/manager/0.log" Feb 23 19:07:45 crc kubenswrapper[4724]: I0223 19:07:45.901783 4724 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-f5x72_6b607306-d732-4142-83d4-92ae20c714cd/manager/0.log" Feb 23 19:07:46 crc kubenswrapper[4724]: I0223 19:07:46.158476 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-9gtq7_2bc5c9a5-0293-4efd-b5a4-0f5c85b238b5/manager/0.log" Feb 23 19:07:46 crc kubenswrapper[4724]: I0223 19:07:46.708993 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-zm7cw_a4842ca7-909d-4d11-bba6-75555f3599b3/manager/0.log" Feb 23 19:07:46 crc kubenswrapper[4724]: I0223 19:07:46.716566 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-22lgm_dd866f81-0e85-4690-b16d-45baf5e856ed/manager/0.log" Feb 23 19:07:46 crc kubenswrapper[4724]: I0223 19:07:46.962870 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pb2dv_7af82cf5-bcff-40d4-8c1b-4bc71e5ca9a3/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.028326 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-djmpk_973124e7-0723-4a5d-ab81-0ef8619f8754/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.208109 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-fxj7d_b906fefc-aaf5-48c0-b45b-3d11dbda1c3e/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.473875 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-xdfp8_73da6414-95e9-4d5a-a0ca-fbeb32048153/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.533241 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-9s4mk_8b193934-08d8-4435-ae40-8b4d7b4878e7/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.830355 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-d5z2j_8bc03a47-9ded-40c0-b924-0c936950a12a/manager/0.log" Feb 23 19:07:47 crc kubenswrapper[4724]: I0223 19:07:47.919914 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-p42tx_24d796b9-e6ea-4b70-9424-1352f71c80a6/manager/0.log" Feb 23 19:07:48 crc kubenswrapper[4724]: I0223 19:07:48.089837 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c44nwv_63923048-2ad5-45f9-9285-9d84dc711fa7/manager/0.log" Feb 23 19:07:48 crc kubenswrapper[4724]: I0223 19:07:48.324531 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-9d7777f98-c6ttl_264513fc-f807-42c5-8089-abc30cf6404b/operator/0.log" Feb 23 19:07:48 crc kubenswrapper[4724]: I0223 19:07:48.537643 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qnjwh_c7f91058-6754-42fd-916c-38da4dd0acd4/registry-server/0.log" Feb 23 19:07:48 crc kubenswrapper[4724]: I0223 19:07:48.796115 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-92g5j_a8f9c97e-0259-4c6e-b188-33081d1706fd/manager/0.log" Feb 23 19:07:48 crc kubenswrapper[4724]: I0223 19:07:48.831264 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-szmk8_77ba1933-d39b-4b30-9d8c-1500d7293444/manager/0.log" Feb 23 19:07:49 crc kubenswrapper[4724]: I0223 19:07:49.059812 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t5pkl_6848c8bf-d8f5-4215-90fb-454b794e33ae/operator/0.log" Feb 23 19:07:49 crc kubenswrapper[4724]: I0223 19:07:49.252414 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-wqsvk_e37a1f8b-cee7-4a13-879e-496d26735ab4/manager/0.log" Feb 23 19:07:49 crc kubenswrapper[4724]: I0223 19:07:49.476936 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-4tnw2_ca793345-c1e2-4207-844b-170dd5b70066/manager/0.log" Feb 23 19:07:49 crc kubenswrapper[4724]: I0223 19:07:49.532017 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-d85f4_3b37faa8-6e4e-427a-9c1a-84993ed85290/manager/0.log" Feb 23 19:07:49 crc kubenswrapper[4724]: I0223 19:07:49.850529 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5cb6b78489-7tdgw_5c1a94e8-9d64-4e28-b2f9-fdd066fddd6a/manager/0.log" Feb 23 19:07:50 crc kubenswrapper[4724]: I0223 19:07:50.254646 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bf9ddc465-xrp8k_c38380c9-1ff8-4a96-9c4a-15ed760a25db/manager/0.log" Feb 23 19:07:55 crc kubenswrapper[4724]: I0223 19:07:55.483818 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-4zgfm_70c55fa9-1fa4-415c-98c4-adfe080201d1/manager/0.log" Feb 23 19:07:57 crc kubenswrapper[4724]: I0223 19:07:57.752866 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:07:57 crc kubenswrapper[4724]: I0223 19:07:57.753185 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:08:08 crc kubenswrapper[4724]: I0223 19:08:08.620043 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xttsp_4a9e0634-64a7-4106-8a10-bfed1ab672da/kube-rbac-proxy/0.log" Feb 23 19:08:08 crc kubenswrapper[4724]: I0223 19:08:08.623683 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n4kjh_e842e9a3-2897-414d-8606-46bb70b207d9/control-plane-machine-set-operator/0.log" Feb 23 19:08:08 crc kubenswrapper[4724]: I0223 19:08:08.653629 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xttsp_4a9e0634-64a7-4106-8a10-bfed1ab672da/machine-api-operator/0.log" Feb 23 19:08:19 crc kubenswrapper[4724]: I0223 19:08:19.979456 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vlrjb_4a08b754-7169-4f53-9212-84ed962b15dd/cert-manager-controller/0.log" Feb 23 19:08:20 crc kubenswrapper[4724]: I0223 19:08:20.163730 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-zzpcv_2e8372fe-4e2d-49f4-94b7-0e6000bd0f5b/cert-manager-webhook/0.log" Feb 23 19:08:20 crc kubenswrapper[4724]: I0223 19:08:20.173283 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pns6j_209587a2-48da-480c-93b0-17a306f362a3/cert-manager-cainjector/0.log" Feb 23 19:08:27 crc kubenswrapper[4724]: I0223 19:08:27.752036 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:08:27 crc kubenswrapper[4724]: I0223 19:08:27.752756 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:08:27 crc kubenswrapper[4724]: I0223 19:08:27.752823 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 19:08:27 crc kubenswrapper[4724]: I0223 19:08:27.754138 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:08:27 crc kubenswrapper[4724]: I0223 19:08:27.754198 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e" gracePeriod=600 Feb 23 19:08:28 crc kubenswrapper[4724]: I0223 19:08:28.862090 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e" exitCode=0 Feb 23 19:08:28 crc kubenswrapper[4724]: I0223 19:08:28.862171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e"} Feb 23 19:08:28 crc kubenswrapper[4724]: I0223 19:08:28.862642 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" 
event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerStarted","Data":"58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"} Feb 23 19:08:28 crc kubenswrapper[4724]: I0223 19:08:28.862671 4724 scope.go:117] "RemoveContainer" containerID="0445ec09f32cc3389918fbf9447b786148afd5f9c063c45817e69392efe39cd6" Feb 23 19:08:31 crc kubenswrapper[4724]: I0223 19:08:31.518172 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-lrrzf_86c89d64-bec0-4e95-ae8c-194200a9f20c/nmstate-console-plugin/0.log" Feb 23 19:08:31 crc kubenswrapper[4724]: I0223 19:08:31.677792 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cfznb_28356f9d-af74-4f20-ba5c-8a40fda9ef6d/nmstate-handler/0.log" Feb 23 19:08:31 crc kubenswrapper[4724]: I0223 19:08:31.755889 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2s7zq_bce58068-4adb-427b-96f8-e289d595515d/kube-rbac-proxy/0.log" Feb 23 19:08:31 crc kubenswrapper[4724]: I0223 19:08:31.825956 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2s7zq_bce58068-4adb-427b-96f8-e289d595515d/nmstate-metrics/0.log" Feb 23 19:08:31 crc kubenswrapper[4724]: I0223 19:08:31.926842 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-hs7ws_12ceca0c-78de-41ff-8e20-cdf172bd915e/nmstate-operator/0.log" Feb 23 19:08:32 crc kubenswrapper[4724]: I0223 19:08:32.057255 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-49gxm_1217d925-38a5-4311-a32a-49e306238283/nmstate-webhook/0.log" Feb 23 19:08:44 crc kubenswrapper[4724]: I0223 19:08:44.188322 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5jjjl_814ddfc1-f41d-41fe-9e19-72ebf86f8950/prometheus-operator/0.log" Feb 23 19:08:44 crc kubenswrapper[4724]: I0223 19:08:44.341740 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml_88e5fd13-0f53-4516-b0e8-73f22b9837eb/prometheus-operator-admission-webhook/0.log" Feb 23 19:08:44 crc kubenswrapper[4724]: I0223 19:08:44.362198 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd_7750cf0f-feab-4fd7-a8a3-4fc9298a169e/prometheus-operator-admission-webhook/0.log" Feb 23 19:08:44 crc kubenswrapper[4724]: I0223 19:08:44.510235 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-djp7f_0a3d2d9a-1225-4ec1-ac5b-4657ca676522/operator/0.log" Feb 23 19:08:44 crc kubenswrapper[4724]: I0223 19:08:44.584634 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6j5cq_606f1fc9-e753-4c28-8386-dfe7bb1f4eca/perses-operator/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.382810 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fhn7w_8637711e-f5d2-43e1-b8f6-65df43b16ffc/kube-rbac-proxy/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.525262 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fhn7w_8637711e-f5d2-43e1-b8f6-65df43b16ffc/controller/0.log" Feb 23 19:08:57 crc 
kubenswrapper[4724]: I0223 19:08:57.581837 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.815755 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.828420 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.854535 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 19:08:57 crc kubenswrapper[4724]: I0223 19:08:57.861833 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.002929 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.025987 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.046465 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.060423 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.196025 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-reloader/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.202527 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-frr-files/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.233301 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/controller/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.234040 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/cp-metrics/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.428036 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/kube-rbac-proxy/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.446710 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/frr-metrics/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.457055 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/kube-rbac-proxy-frr/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.592864 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/reloader/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.698157 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-kcvmh_39fa75d7-3799-41ce-9a9e-ebf9dd8c347b/frr-k8s-webhook-server/0.log" Feb 23 19:08:58 crc kubenswrapper[4724]: I0223 19:08:58.918373 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bb6655d58-zmrrz_8fb21fbd-388b-4b8f-a0ec-78f2396bf456/manager/0.log" Feb 23 19:08:59 crc kubenswrapper[4724]: I0223 19:08:59.103123 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-745c85d5d8-v6vwt_7e8b0053-5568-4e4e-8021-f2351dc9f4df/webhook-server/0.log" Feb 23 19:08:59 crc kubenswrapper[4724]: I0223 19:08:59.249354 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dxbt6_a55b73c4-da87-4ce8-8418-3d6d854c0b0e/kube-rbac-proxy/0.log" Feb 23 19:08:59 crc kubenswrapper[4724]: I0223 19:08:59.890919 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dxbt6_a55b73c4-da87-4ce8-8418-3d6d854c0b0e/speaker/0.log" Feb 23 19:09:00 crc kubenswrapper[4724]: I0223 19:09:00.080942 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6cjbc_da6b6734-568e-4283-8df5-f8e9abbef784/frr/0.log" Feb 23 19:09:11 crc kubenswrapper[4724]: I0223 19:09:11.915822 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.088507 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.108261 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.116241 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.291897 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.296538 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.297151 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08x48lj_ca8d1d6c-2638-493f-8aed-775dd9bd326d/extract/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.459473 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.612126 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.628892 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.629167 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.806851 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/pull/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.811850 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/util/0.log" Feb 23 19:09:12 crc kubenswrapper[4724]: I0223 19:09:12.820424 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc21324dbf_371db8f9-a502-40cb-a0cd-256b481c12aa/extract/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.003992 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.199794 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.223783 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.230757 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.425725 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-utilities/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.443737 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/extract-content/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.629801 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.854737 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 19:09:13 crc kubenswrapper[4724]: I0223 19:09:13.926793 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.038272 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.101402 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-utilities/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.154217 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/extract-content/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.264953 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pmtdz_7bba3085-58d9-4a69-b93b-f4b0034fa2ec/registry-server/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.370075 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.617675 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.619169 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.650537 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.879486 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/util/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.892769 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/pull/0.log" Feb 23 19:09:14 crc kubenswrapper[4724]: I0223 19:09:14.916778 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatt28t_af1905ab-fece-4f3b-8f30-f96b5022bb3d/extract/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.013938 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwjqh_24d64c3d-d544-4a74-ae90-36b17131a812/registry-server/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.068995 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w8klm_67588304-35a3-404e-bd48-9f7bc0ec5a44/marketplace-operator/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.186350 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.369988 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.370108 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.400660 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.507461 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-utilities/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.579106 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/extract-content/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.731306 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mb467_0fd0393f-f7c2-45b6-9bcd-d83bb0e7988d/registry-server/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.756635 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.915310 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.943701 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log" Feb 23 19:09:15 crc kubenswrapper[4724]: I0223 19:09:15.945028 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log" Feb 23 19:09:16 crc kubenswrapper[4724]: I0223 19:09:16.142010 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-utilities/0.log" Feb 23 19:09:16 crc kubenswrapper[4724]: I0223 19:09:16.155777 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/extract-content/0.log" Feb 23 19:09:16 crc kubenswrapper[4724]: I0223 19:09:16.802096 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z9smz_474bcfee-4643-4fdc-b7c9-d823ecb79b90/registry-server/0.log" Feb 23 19:09:28 crc kubenswrapper[4724]: I0223 19:09:28.152476 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-ldjml_88e5fd13-0f53-4516-b0e8-73f22b9837eb/prometheus-operator-admission-webhook/0.log" Feb 23 19:09:28 crc kubenswrapper[4724]: I0223 19:09:28.155980 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5jjjl_814ddfc1-f41d-41fe-9e19-72ebf86f8950/prometheus-operator/0.log" Feb 23 19:09:28 crc kubenswrapper[4724]: I0223 19:09:28.171192 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c699bc6b7-lthdd_7750cf0f-feab-4fd7-a8a3-4fc9298a169e/prometheus-operator-admission-webhook/0.log" Feb 23 19:09:28 crc kubenswrapper[4724]: I0223 19:09:28.326936 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6j5cq_606f1fc9-e753-4c28-8386-dfe7bb1f4eca/perses-operator/0.log" Feb 23 19:09:28 crc kubenswrapper[4724]: I0223 19:09:28.338845 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-djp7f_0a3d2d9a-1225-4ec1-ac5b-4657ca676522/operator/0.log" Feb 23 19:10:57 crc kubenswrapper[4724]: I0223 19:10:57.752759 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:10:57 crc kubenswrapper[4724]: I0223 19:10:57.753348 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:11:27 crc kubenswrapper[4724]: I0223 19:11:27.752822 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:11:27 crc kubenswrapper[4724]: I0223 19:11:27.753461 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:11:34 crc kubenswrapper[4724]: I0223 19:11:34.089329 4724 generic.go:334] "Generic (PLEG): container finished" podID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerID="e5813259c09f2c8a7f3e0ed93cb803f891fec90e0ac764c6318933dc581dc811" exitCode=0 Feb 23 19:11:34 crc kubenswrapper[4724]: I0223 19:11:34.089419 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" event={"ID":"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403","Type":"ContainerDied","Data":"e5813259c09f2c8a7f3e0ed93cb803f891fec90e0ac764c6318933dc581dc811"} Feb 23 19:11:34 crc kubenswrapper[4724]: I0223 19:11:34.090760 4724 scope.go:117] "RemoveContainer" containerID="e5813259c09f2c8a7f3e0ed93cb803f891fec90e0ac764c6318933dc581dc811" Feb 23 19:11:34 crc kubenswrapper[4724]: I0223 19:11:34.268283 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-k7jcr_must-gather-4d6jx_fdd5db2f-5d5d-4e03-9ea5-52205ebbc403/gather/0.log" Feb 23 19:11:46 crc kubenswrapper[4724]: I0223 19:11:46.961442 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k7jcr/must-gather-4d6jx"] Feb 23 19:11:46 crc kubenswrapper[4724]: I0223 19:11:46.962157 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="copy" containerID="cri-o://96f5a033736943c7513f9a8f34a49880113a195bc8587dc4ea5bc800d71b98cf" gracePeriod=2 Feb 23 19:11:46 crc kubenswrapper[4724]: I0223 19:11:46.963653 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k7jcr/must-gather-4d6jx"] Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.200094 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k7jcr_must-gather-4d6jx_fdd5db2f-5d5d-4e03-9ea5-52205ebbc403/copy/0.log" Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.200516 4724 generic.go:334] "Generic (PLEG): container finished" podID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerID="96f5a033736943c7513f9a8f34a49880113a195bc8587dc4ea5bc800d71b98cf" exitCode=143 Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.581679 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k7jcr_must-gather-4d6jx_fdd5db2f-5d5d-4e03-9ea5-52205ebbc403/copy/0.log" Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.582280 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.753190 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output\") pod \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.753425 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2db4q\" (UniqueName: \"kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q\") pod \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\" (UID: \"fdd5db2f-5d5d-4e03-9ea5-52205ebbc403\") " Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.759354 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q" (OuterVolumeSpecName: "kube-api-access-2db4q") pod "fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" (UID: "fdd5db2f-5d5d-4e03-9ea5-52205ebbc403"). InnerVolumeSpecName "kube-api-access-2db4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.855564 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2db4q\" (UniqueName: \"kubernetes.io/projected/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-kube-api-access-2db4q\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:47 crc kubenswrapper[4724]: I0223 19:11:47.966627 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" (UID: "fdd5db2f-5d5d-4e03-9ea5-52205ebbc403"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.060976 4724 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.210137 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k7jcr_must-gather-4d6jx_fdd5db2f-5d5d-4e03-9ea5-52205ebbc403/copy/0.log" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.210486 4724 scope.go:117] "RemoveContainer" containerID="96f5a033736943c7513f9a8f34a49880113a195bc8587dc4ea5bc800d71b98cf" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.210631 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k7jcr/must-gather-4d6jx" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.237970 4724 scope.go:117] "RemoveContainer" containerID="e5813259c09f2c8a7f3e0ed93cb803f891fec90e0ac764c6318933dc581dc811" Feb 23 19:11:48 crc kubenswrapper[4724]: I0223 19:11:48.982757 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" path="/var/lib/kubelet/pods/fdd5db2f-5d5d-4e03-9ea5-52205ebbc403/volumes" Feb 23 19:11:57 crc kubenswrapper[4724]: I0223 19:11:57.752009 4724 patch_prober.go:28] interesting pod/machine-config-daemon-rw78r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:11:57 crc kubenswrapper[4724]: I0223 19:11:57.752497 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:11:57 crc kubenswrapper[4724]: I0223 19:11:57.752556 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" Feb 23 19:11:57 crc kubenswrapper[4724]: I0223 19:11:57.753533 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"} pod="openshift-machine-config-operator/machine-config-daemon-rw78r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:11:57 crc kubenswrapper[4724]: I0223 19:11:57.753605 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerName="machine-config-daemon" containerID="cri-o://58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b" gracePeriod=600 Feb 23 19:11:57 crc kubenswrapper[4724]: E0223 19:11:57.873904 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:11:58 crc kubenswrapper[4724]: I0223 19:11:58.434706 4724 generic.go:334] "Generic (PLEG): container finished" podID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b" exitCode=0 Feb 23 19:11:58 crc kubenswrapper[4724]: I0223 19:11:58.434759 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" event={"ID":"a065b197-b354-4d9b-b2e9-7d4882a3d1a2","Type":"ContainerDied","Data":"58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"} Feb 23 19:11:58 crc kubenswrapper[4724]: I0223 19:11:58.435101 4724 scope.go:117] "RemoveContainer" containerID="228c15b158903cc905167a8959b9c9af81574168172f4b4bf0b36af8b0095e0e" Feb 23 19:11:58 crc kubenswrapper[4724]: I0223 19:11:58.436002 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b" Feb 23 19:11:58 crc kubenswrapper[4724]: E0223 19:11:58.436362 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:12:12 crc kubenswrapper[4724]: I0223 19:12:12.951982 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b" Feb 23 19:12:12 crc kubenswrapper[4724]: E0223 19:12:12.953554 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.707346 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52r94"] Feb 23 19:12:15 crc kubenswrapper[4724]: E0223 19:12:15.708209 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="gather" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708228 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="gather" Feb 23 19:12:15 crc kubenswrapper[4724]: E0223 19:12:15.708240 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="copy" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708247 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="copy" Feb 23 19:12:15 crc kubenswrapper[4724]: E0223 19:12:15.708268 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a91896-a5d5-4879-bd9f-0dd8791887d4" containerName="container-00" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708275 4724 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="56a91896-a5d5-4879-bd9f-0dd8791887d4" containerName="container-00" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708571 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a91896-a5d5-4879-bd9f-0dd8791887d4" containerName="container-00" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708586 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="copy" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.708605 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd5db2f-5d5d-4e03-9ea5-52205ebbc403" containerName="gather" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.710893 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.743868 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52r94"] Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.845691 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.845798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjd9\" (UniqueName: \"kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.845817 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.947828 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.947968 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjd9\" (UniqueName: \"kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.947991 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94" Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 
19:12:15.948487 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.948648 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:15 crc kubenswrapper[4724]: I0223 19:12:15.970378 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjd9\" (UniqueName: \"kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9\") pod \"community-operators-52r94\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") " pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:16 crc kubenswrapper[4724]: I0223 19:12:16.040084 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:16 crc kubenswrapper[4724]: I0223 19:12:16.579982 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52r94"]
Feb 23 19:12:16 crc kubenswrapper[4724]: I0223 19:12:16.642111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerStarted","Data":"a84e8b9dc84e23fd108507084581da30fb0136b159727442a2068ae4da9a001d"}
Feb 23 19:12:17 crc kubenswrapper[4724]: I0223 19:12:17.670442 4724 generic.go:334] "Generic (PLEG): container finished" podID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerID="00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8" exitCode=0
Feb 23 19:12:17 crc kubenswrapper[4724]: I0223 19:12:17.670498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerDied","Data":"00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8"}
Feb 23 19:12:17 crc kubenswrapper[4724]: I0223 19:12:17.673524 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 19:12:18 crc kubenswrapper[4724]: I0223 19:12:18.680245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerStarted","Data":"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"}
Feb 23 19:12:20 crc kubenswrapper[4724]: I0223 19:12:20.700114 4724 generic.go:334] "Generic (PLEG): container finished" podID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerID="f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea" exitCode=0
Feb 23 19:12:20 crc kubenswrapper[4724]: I0223 19:12:20.700197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerDied","Data":"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"}
Feb 23 19:12:21 crc kubenswrapper[4724]: I0223 19:12:21.712895 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerStarted","Data":"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"}
Feb 23 19:12:21 crc kubenswrapper[4724]: I0223 19:12:21.745879 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52r94" podStartSLOduration=3.270838568 podStartE2EDuration="6.745853017s" podCreationTimestamp="2026-02-23 19:12:15 +0000 UTC" firstStartedPulling="2026-02-23 19:12:17.67325882 +0000 UTC m=+6093.489458410" lastFinishedPulling="2026-02-23 19:12:21.148273259 +0000 UTC m=+6096.964472859" observedRunningTime="2026-02-23 19:12:21.733570111 +0000 UTC m=+6097.549769711" watchObservedRunningTime="2026-02-23 19:12:21.745853017 +0000 UTC m=+6097.562052617"
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.040169 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.040475 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.085185 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.824950 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.887979 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52r94"]
Feb 23 19:12:26 crc kubenswrapper[4724]: I0223 19:12:26.950706 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:12:26 crc kubenswrapper[4724]: E0223 19:12:26.950930 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:12:28 crc kubenswrapper[4724]: I0223 19:12:28.775023 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52r94" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="registry-server" containerID="cri-o://01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3" gracePeriod=2
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.315649 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.389250 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qjd9\" (UniqueName: \"kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9\") pod \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") "
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.389529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities\") pod \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") "
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.389625 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content\") pod \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\" (UID: \"c53bdf94-a632-4e5f-bbad-bfc225f3fb01\") "
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.390686 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities" (OuterVolumeSpecName: "utilities") pod "c53bdf94-a632-4e5f-bbad-bfc225f3fb01" (UID: "c53bdf94-a632-4e5f-bbad-bfc225f3fb01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.403074 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9" (OuterVolumeSpecName: "kube-api-access-5qjd9") pod "c53bdf94-a632-4e5f-bbad-bfc225f3fb01" (UID: "c53bdf94-a632-4e5f-bbad-bfc225f3fb01"). InnerVolumeSpecName "kube-api-access-5qjd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.492614 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.492651 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qjd9\" (UniqueName: \"kubernetes.io/projected/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-kube-api-access-5qjd9\") on node \"crc\" DevicePath \"\""
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.787641 4724 generic.go:334] "Generic (PLEG): container finished" podID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerID="01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3" exitCode=0
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.787684 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerDied","Data":"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"}
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.787712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52r94" event={"ID":"c53bdf94-a632-4e5f-bbad-bfc225f3fb01","Type":"ContainerDied","Data":"a84e8b9dc84e23fd108507084581da30fb0136b159727442a2068ae4da9a001d"}
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.787729 4724 scope.go:117] "RemoveContainer" containerID="01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.787872 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52r94"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.807859 4724 scope.go:117] "RemoveContainer" containerID="f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.830979 4724 scope.go:117] "RemoveContainer" containerID="00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.875012 4724 scope.go:117] "RemoveContainer" containerID="01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"
Feb 23 19:12:29 crc kubenswrapper[4724]: E0223 19:12:29.875486 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3\": container with ID starting with 01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3 not found: ID does not exist" containerID="01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.875521 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3"} err="failed to get container status \"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3\": rpc error: code = NotFound desc = could not find container \"01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3\": container with ID starting with 01781de7b7a7e0393995ac2973cda28e87cd653dd65b8bd5e4ce414b2c9abdb3 not found: ID does not exist"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.875540 4724 scope.go:117] "RemoveContainer" containerID="f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"
Feb 23 19:12:29 crc kubenswrapper[4724]: E0223 19:12:29.875818 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea\": container with ID starting with f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea not found: ID does not exist" containerID="f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.875840 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea"} err="failed to get container status \"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea\": rpc error: code = NotFound desc = could not find container \"f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea\": container with ID starting with f723956b9b2871fa3d46df1ef434b9cc1d39722e1b0a5d0249f3cff313d7fbea not found: ID does not exist"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.875852 4724 scope.go:117] "RemoveContainer" containerID="00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8"
Feb 23 19:12:29 crc kubenswrapper[4724]: E0223 19:12:29.876184 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8\": container with ID starting with 00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8 not found: ID does not exist" containerID="00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8"
Feb 23 19:12:29 crc kubenswrapper[4724]: I0223 19:12:29.876205 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8"} err="failed to get container status \"00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8\": rpc error: code = NotFound desc = could not find container \"00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8\": container with ID starting with 00a34f6dd71dc3cbabe6118c76764913d8e37a6cfeeb05aa59adc88e58aed8a8 not found: ID does not exist"
Feb 23 19:12:30 crc kubenswrapper[4724]: I0223 19:12:30.514212 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c53bdf94-a632-4e5f-bbad-bfc225f3fb01" (UID: "c53bdf94-a632-4e5f-bbad-bfc225f3fb01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:12:30 crc kubenswrapper[4724]: I0223 19:12:30.615490 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53bdf94-a632-4e5f-bbad-bfc225f3fb01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:12:30 crc kubenswrapper[4724]: I0223 19:12:30.738731 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52r94"]
Feb 23 19:12:30 crc kubenswrapper[4724]: I0223 19:12:30.750817 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52r94"]
Feb 23 19:12:30 crc kubenswrapper[4724]: I0223 19:12:30.962863 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" path="/var/lib/kubelet/pods/c53bdf94-a632-4e5f-bbad-bfc225f3fb01/volumes"
Feb 23 19:12:38 crc kubenswrapper[4724]: I0223 19:12:38.951284 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:12:38 crc kubenswrapper[4724]: E0223 19:12:38.952060 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:12:51 crc kubenswrapper[4724]: I0223 19:12:51.951657 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:12:51 crc kubenswrapper[4724]: E0223 19:12:51.952429 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:13:05 crc kubenswrapper[4724]: I0223 19:13:05.951913 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:13:05 crc kubenswrapper[4724]: E0223 19:13:05.952967 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.099141 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:15 crc kubenswrapper[4724]: E0223 19:13:15.100158 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="extract-content"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.100176 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="extract-content"
Feb 23 19:13:15 crc kubenswrapper[4724]: E0223 19:13:15.100186 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="registry-server"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.100192 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="registry-server"
Feb 23 19:13:15 crc kubenswrapper[4724]: E0223 19:13:15.100204 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="extract-utilities"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.100211 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="extract-utilities"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.100429 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53bdf94-a632-4e5f-bbad-bfc225f3fb01" containerName="registry-server"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.102071 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.153907 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.189819 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.190674 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7n4\" (UniqueName: \"kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.191568 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.293802 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7n4\" (UniqueName: \"kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.293882 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.294016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.294598 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.294640 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.314126 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7n4\" (UniqueName: \"kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4\") pod \"redhat-marketplace-z4rpk\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") " pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.434999 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:15 crc kubenswrapper[4724]: I0223 19:13:15.896298 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:16 crc kubenswrapper[4724]: I0223 19:13:16.249534 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerID="ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24" exitCode=0
Feb 23 19:13:16 crc kubenswrapper[4724]: I0223 19:13:16.249739 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerDied","Data":"ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24"}
Feb 23 19:13:16 crc kubenswrapper[4724]: I0223 19:13:16.251220 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerStarted","Data":"c9221b0dade7a0de701b447eca87c587293bff0a0844170b80fd27d8fa548458"}
Feb 23 19:13:17 crc kubenswrapper[4724]: I0223 19:13:17.272113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerStarted","Data":"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"}
Feb 23 19:13:18 crc kubenswrapper[4724]: I0223 19:13:18.285591 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerID="11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44" exitCode=0
Feb 23 19:13:18 crc kubenswrapper[4724]: I0223 19:13:18.285737 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerDied","Data":"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"}
Feb 23 19:13:19 crc kubenswrapper[4724]: I0223 19:13:19.301796 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerStarted","Data":"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"}
Feb 23 19:13:19 crc kubenswrapper[4724]: I0223 19:13:19.326370 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z4rpk" podStartSLOduration=1.838130026 podStartE2EDuration="4.326344958s" podCreationTimestamp="2026-02-23 19:13:15 +0000 UTC" firstStartedPulling="2026-02-23 19:13:16.251230335 +0000 UTC m=+6152.067429935" lastFinishedPulling="2026-02-23 19:13:18.739445227 +0000 UTC m=+6154.555644867" observedRunningTime="2026-02-23 19:13:19.322694207 +0000 UTC m=+6155.138893817" watchObservedRunningTime="2026-02-23 19:13:19.326344958 +0000 UTC m=+6155.142544578"
Feb 23 19:13:19 crc kubenswrapper[4724]: I0223 19:13:19.952597 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:13:19 crc kubenswrapper[4724]: E0223 19:13:19.953415 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:13:25 crc kubenswrapper[4724]: I0223 19:13:25.435720 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:25 crc kubenswrapper[4724]: I0223 19:13:25.436608 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:25 crc kubenswrapper[4724]: I0223 19:13:25.519716 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:26 crc kubenswrapper[4724]: I0223 19:13:26.431738 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:26 crc kubenswrapper[4724]: I0223 19:13:26.498005 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.384237 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z4rpk" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="registry-server" containerID="cri-o://4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac" gracePeriod=2
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.847945 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.991565 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content\") pod \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") "
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.991651 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7n4\" (UniqueName: \"kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4\") pod \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") "
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.991969 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities\") pod \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\" (UID: \"fb03318b-3389-4ceb-baf6-4c123c6abf5f\") "
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.992621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities" (OuterVolumeSpecName: "utilities") pod "fb03318b-3389-4ceb-baf6-4c123c6abf5f" (UID: "fb03318b-3389-4ceb-baf6-4c123c6abf5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.993538 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:28 crc kubenswrapper[4724]: I0223 19:13:28.998725 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4" (OuterVolumeSpecName: "kube-api-access-zd7n4") pod "fb03318b-3389-4ceb-baf6-4c123c6abf5f" (UID: "fb03318b-3389-4ceb-baf6-4c123c6abf5f"). InnerVolumeSpecName "kube-api-access-zd7n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.015883 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb03318b-3389-4ceb-baf6-4c123c6abf5f" (UID: "fb03318b-3389-4ceb-baf6-4c123c6abf5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.095519 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb03318b-3389-4ceb-baf6-4c123c6abf5f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.095822 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7n4\" (UniqueName: \"kubernetes.io/projected/fb03318b-3389-4ceb-baf6-4c123c6abf5f-kube-api-access-zd7n4\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.395759 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerID="4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac" exitCode=0
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.395808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerDied","Data":"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"}
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.395842 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4rpk" event={"ID":"fb03318b-3389-4ceb-baf6-4c123c6abf5f","Type":"ContainerDied","Data":"c9221b0dade7a0de701b447eca87c587293bff0a0844170b80fd27d8fa548458"}
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.395860 4724 scope.go:117] "RemoveContainer" containerID="4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.395865 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4rpk"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.436938 4724 scope.go:117] "RemoveContainer" containerID="11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.448825 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.460897 4724 scope.go:117] "RemoveContainer" containerID="ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.462904 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4rpk"]
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.530633 4724 scope.go:117] "RemoveContainer" containerID="4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"
Feb 23 19:13:29 crc kubenswrapper[4724]: E0223 19:13:29.531722 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac\": container with ID starting with 4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac not found: ID does not exist" containerID="4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.531763 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac"} err="failed to get container status \"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac\": rpc error: code = NotFound desc = could not find container \"4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac\": container with ID starting with 4903aa679220674472d980fbefaf58f65843b7b3d837289b00e39e87624f11ac not found: ID does not exist"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.531790 4724 scope.go:117] "RemoveContainer" containerID="11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"
Feb 23 19:13:29 crc kubenswrapper[4724]: E0223 19:13:29.532341 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44\": container with ID starting with 11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44 not found: ID does not exist" containerID="11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.532373 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44"} err="failed to get container status \"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44\": rpc error: code = NotFound desc = could not find container \"11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44\": container with ID starting with 11a0ac874d6ff9b595e7bb26d4e27a9b066ff9dad4cf3b6d8e22c7a45b017e44 not found: ID does not exist"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.532724 4724 scope.go:117] "RemoveContainer" containerID="ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24"
Feb 23 19:13:29 crc kubenswrapper[4724]: E0223 19:13:29.533363 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24\": container with ID starting with ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24 not found: ID does not exist" containerID="ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24"
Feb 23 19:13:29 crc kubenswrapper[4724]: I0223 19:13:29.533418 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24"} err="failed to get container status \"ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24\": rpc error: code = NotFound desc = could not find container \"ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24\": container with ID starting with ccfeeca793a61440573db045b803adaba989101e163b21370b1190e0b14a8e24 not found: ID does not exist"
Feb 23 19:13:30 crc kubenswrapper[4724]: I0223 19:13:30.965003 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" path="/var/lib/kubelet/pods/fb03318b-3389-4ceb-baf6-4c123c6abf5f/volumes"
Feb 23 19:13:34 crc kubenswrapper[4724]: I0223 19:13:34.964689 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:13:34 crc kubenswrapper[4724]: E0223 19:13:34.965360 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.368168 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:44 crc kubenswrapper[4724]: E0223 19:13:44.369258 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="extract-content"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.369274 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="extract-content"
Feb 23 19:13:44 crc kubenswrapper[4724]: E0223 19:13:44.369283 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="extract-utilities"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.369289 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="extract-utilities"
Feb 23 19:13:44 crc kubenswrapper[4724]: E0223 19:13:44.369312 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="registry-server"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.369318 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="registry-server"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.369563 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb03318b-3389-4ceb-baf6-4c123c6abf5f" containerName="registry-server"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.371000 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.380402 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.537803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.537891 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb5qz\" (UniqueName: \"kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.537974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.639435 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.639492 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb5qz\" (UniqueName: \"kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.639541 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.640050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.640093 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.664770 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb5qz\" (UniqueName: \"kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz\") pod \"certified-operators-t9dkq\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") " pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:44 crc kubenswrapper[4724]: I0223 19:13:44.706993 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:45 crc kubenswrapper[4724]: I0223 19:13:45.193542 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:45 crc kubenswrapper[4724]: I0223 19:13:45.552433 4724 generic.go:334] "Generic (PLEG): container finished" podID="4bd9e94a-300f-4795-bec3-86a4bcc1b67d" containerID="46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961" exitCode=0
Feb 23 19:13:45 crc kubenswrapper[4724]: I0223 19:13:45.552509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerDied","Data":"46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961"}
Feb 23 19:13:45 crc kubenswrapper[4724]: I0223 19:13:45.552714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerStarted","Data":"66fb2c39259cde84a30d8dc83f1854613812d36c1aec6643d1e86c07e1180aa8"}
Feb 23 19:13:46 crc kubenswrapper[4724]: I0223 19:13:46.571491 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerStarted","Data":"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"}
Feb 23 19:13:46 crc kubenswrapper[4724]: I0223 19:13:46.950527 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:13:46 crc kubenswrapper[4724]: E0223 19:13:46.950783 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:13:48 crc kubenswrapper[4724]: I0223 19:13:48.594478 4724 generic.go:334] "Generic (PLEG): container finished" podID="4bd9e94a-300f-4795-bec3-86a4bcc1b67d" containerID="9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0" exitCode=0
Feb 23 19:13:48 crc kubenswrapper[4724]: I0223 19:13:48.594554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerDied","Data":"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"}
Feb 23 19:13:49 crc kubenswrapper[4724]: I0223 19:13:49.617784 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerStarted","Data":"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"}
Feb 23 19:13:49 crc kubenswrapper[4724]: I0223 19:13:49.649381 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t9dkq" podStartSLOduration=2.208294206 podStartE2EDuration="5.64935683s" podCreationTimestamp="2026-02-23 19:13:44 +0000 UTC" firstStartedPulling="2026-02-23 19:13:45.555489961 +0000 UTC m=+6181.371689591" lastFinishedPulling="2026-02-23 19:13:48.996552605 +0000 UTC m=+6184.812752215" observedRunningTime="2026-02-23 19:13:49.639258958 +0000 UTC m=+6185.455458578" watchObservedRunningTime="2026-02-23 19:13:49.64935683 +0000 UTC m=+6185.465556440"
Feb 23 19:13:54 crc kubenswrapper[4724]: I0223 19:13:54.707610 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:54 crc kubenswrapper[4724]: I0223 19:13:54.708219 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:54 crc kubenswrapper[4724]: I0223 19:13:54.775599 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:55 crc kubenswrapper[4724]: I0223 19:13:55.748965 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:55 crc kubenswrapper[4724]: I0223 19:13:55.814824 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:57 crc kubenswrapper[4724]: I0223 19:13:57.714326 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9dkq" podUID="4bd9e94a-300f-4795-bec3-86a4bcc1b67d" containerName="registry-server" containerID="cri-o://9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a" gracePeriod=2
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.190086 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.279037 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities\") pod \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") "
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.279281 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb5qz\" (UniqueName: \"kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz\") pod \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") "
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.279322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content\") pod \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\" (UID: \"4bd9e94a-300f-4795-bec3-86a4bcc1b67d\") "
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.280198 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities" (OuterVolumeSpecName: "utilities") pod "4bd9e94a-300f-4795-bec3-86a4bcc1b67d" (UID: "4bd9e94a-300f-4795-bec3-86a4bcc1b67d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.284960 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz" (OuterVolumeSpecName: "kube-api-access-sb5qz") pod "4bd9e94a-300f-4795-bec3-86a4bcc1b67d" (UID: "4bd9e94a-300f-4795-bec3-86a4bcc1b67d"). InnerVolumeSpecName "kube-api-access-sb5qz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.355932 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bd9e94a-300f-4795-bec3-86a4bcc1b67d" (UID: "4bd9e94a-300f-4795-bec3-86a4bcc1b67d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.381668 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb5qz\" (UniqueName: \"kubernetes.io/projected/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-kube-api-access-sb5qz\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.381710 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.381723 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd9e94a-300f-4795-bec3-86a4bcc1b67d-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.726250 4724 generic.go:334] "Generic (PLEG): container finished" podID="4bd9e94a-300f-4795-bec3-86a4bcc1b67d" containerID="9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a" exitCode=0
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.726316 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9dkq"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.726343 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerDied","Data":"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"}
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.728027 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9dkq" event={"ID":"4bd9e94a-300f-4795-bec3-86a4bcc1b67d","Type":"ContainerDied","Data":"66fb2c39259cde84a30d8dc83f1854613812d36c1aec6643d1e86c07e1180aa8"}
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.728055 4724 scope.go:117] "RemoveContainer" containerID="9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.772679 4724 scope.go:117] "RemoveContainer" containerID="9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.779587 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.798786 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9dkq"]
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.816304 4724 scope.go:117] "RemoveContainer" containerID="46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.893061 4724 scope.go:117] "RemoveContainer" containerID="9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"
Feb 23 19:13:58 crc kubenswrapper[4724]: E0223 19:13:58.894334 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a\": container with ID starting with 9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a not found: ID does not exist" containerID="9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.894541 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a"} err="failed to get container status \"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a\": rpc error: code = NotFound desc = could not find container \"9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a\": container with ID starting with 9848d1ac0d547a3e81864e91d1897e1a573103ad84134ba7aee58abe070ed91a not found: ID does not exist"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.894643 4724 scope.go:117] "RemoveContainer" containerID="9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"
Feb 23 19:13:58 crc kubenswrapper[4724]: E0223 19:13:58.897387 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0\": container with ID starting with 9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0 not found: ID does not exist" containerID="9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.897448 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0"} err="failed to get container status \"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0\": rpc error: code = NotFound desc = could not find container \"9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0\": container with ID starting with 9f160d8e5a79505322dd4eba8e9a62f3a67423c8b0b68048bf6d6bf3dd31fdc0 not found: ID does not exist"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.897483 4724 scope.go:117] "RemoveContainer" containerID="46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961"
Feb 23 19:13:58 crc kubenswrapper[4724]: E0223 19:13:58.898807 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961\": container with ID starting with 46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961 not found: ID does not exist" containerID="46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.899070 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961"} err="failed to get container status \"46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961\": rpc error: code = NotFound desc = could not find container \"46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961\": container with ID starting with 46b02026f44ccac811fd0ebd436cfdb3a02db5c40565af5d286b1fce7cd70961 not found: ID does not exist"
Feb 23 19:13:58 crc kubenswrapper[4724]: I0223 19:13:58.962940 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd9e94a-300f-4795-bec3-86a4bcc1b67d" path="/var/lib/kubelet/pods/4bd9e94a-300f-4795-bec3-86a4bcc1b67d/volumes"
Feb 23 19:13:59 crc kubenswrapper[4724]: I0223 19:13:59.951907 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:13:59 crc kubenswrapper[4724]: E0223 19:13:59.952157 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:14:10 crc kubenswrapper[4724]: I0223 19:14:10.952007 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:14:10 crc kubenswrapper[4724]: E0223 19:14:10.952733 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:14:22 crc kubenswrapper[4724]: I0223 19:14:22.951325 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:14:22 crc kubenswrapper[4724]: E0223 19:14:22.952564 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:14:37 crc kubenswrapper[4724]: I0223 19:14:37.951458 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:14:37 crc kubenswrapper[4724]: E0223 19:14:37.952253 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"
Feb 23 19:14:50 crc kubenswrapper[4724]: I0223 19:14:50.951890 4724 scope.go:117] "RemoveContainer" containerID="58372ce2684889e3716ae231c6d47c4b508e3590d753565fcca252a2be0ab53b"
Feb 23 19:14:50 crc kubenswrapper[4724]: E0223 19:14:50.954055 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rw78r_openshift-machine-config-operator(a065b197-b354-4d9b-b2e9-7d4882a3d1a2)\"" pod="openshift-machine-config-operator/machine-config-daemon-rw78r" podUID="a065b197-b354-4d9b-b2e9-7d4882a3d1a2"